
Crawling Ajax-driven Web 2.0 Applications

Who cares? writes "Crawling web applications is one of the key phases of automated web application scanning. The objective of crawling is to collect all possible resources from the server in order to automate vulnerability detection on each of these resources. A resource that is overlooked during this discovery phase can mean a failure to detect some vulnerabilities. The introduction of Ajax poses new challenges for the crawling engine, and new ways of handling the crawling process are required as a result. The objective of this paper is to take a practical approach to addressing this issue using rbNarcissus, Watir and Ruby."
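To illustrate the problem the paper tackles, here is a minimal Ruby sketch (hypothetical markup and regexes, not the paper's actual code): a traditional crawler that only harvests `href` attributes never sees a resource that is fetched by JavaScript via XMLHttpRequest, whereas a JS-aware pass (rbNarcissus parses the script source in the paper's approach; a crude regex stands in for that here) can recover the Ajax endpoint.

```ruby
# Hypothetical page: one static link plus one resource loaded via Ajax.
html = <<~HTML
  <html><body>
    <a href="/about.html">About</a>
    <script>
      function load() {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "/ajax/getdata.php", true);
        xhr.send();
      }
    </script>
  </body></html>
HTML

# Static crawling: collect href attributes only.
static_links = html.scan(/href="([^"]+)"/).flatten

# JS-aware pass: inspect script source for XHR targets
# (a stand-in for real JavaScript parsing with rbNarcissus).
ajax_calls = html.scan(/xhr\.open\("GET",\s*"([^"]+)"/).flatten

puts static_links.inspect  # ["/about.html"]
puts ajax_calls.inspect    # ["/ajax/getdata.php"]
```

The static pass misses `/ajax/getdata.php` entirely, which is exactly the kind of overlooked resource that leads to undetected vulnerabilities; driving a real browser with Watir goes further by actually firing the events and observing the requests.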

Article Link: http://www.net-security.org/article.php?id=973