Friday, March 2, 2018

Attack Mapping - Information about Intermediaries


As part of mapping the infrastructure, it is important to identify any intermediaries such as virtual servers, load balancers, proxies, or firewalls, because the presence of such components in the targeted environment may call for a completely different attack approach.

The following examples explain the main practices used to identify such intermediaries:


Detecting load balancers (a small sketch follows the list):
- Scanning the surrounding IP range
- Detecting unsynchronized timestamps across responses
- Detecting different Last-Modified or ETag headers for the same resource
- Existence of unusual cookies
- Different SSL certificates
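
As a minimal, non-definitive sketch of the header-comparison idea (assuming Python with the third-party requests library, and a placeholder target URL), the following script fetches the same resource several times and reports headers that differ between responses:

# A minimal sketch, not a definitive detector: fetch the same resource
# several times and look for headers that differ between responses, which
# can hint at multiple back-end servers behind a load balancer.
# The URL is a hypothetical placeholder; "requests" is assumed installed.
import requests

URL = "http://theSiteName.com/stable/en/about"  # hypothetical target
WATCHED = ("ETag", "Last-Modified", "Server")

seen = {name: set() for name in WATCHED}
cookie_names = set()

for _ in range(10):
    resp = requests.get(URL, timeout=10)
    for name in WATCHED:
        if name in resp.headers:
            seen[name].add(resp.headers[name])
    # Unusual affinity cookies (e.g. BIGipServer*) also betray load balancing.
    cookie_names.update(resp.cookies.keys())

for name, values in seen.items():
    if len(values) > 1:
        print(f"{name} varies across requests ({len(values)} values): possible load balancing")
print("Cookies observed:", ", ".join(sorted(cookie_names)) or "none")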

Detecting proxies (see the TRACE sketch after this list):
- Using the TRACE command, which echoes the exact request, and looking for changes
- Standard connect test
- Standard proxy request
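
The TRACE test can be sketched as follows using only the Python standard library; the host name is a placeholder, and extra headers such as Via or X-Forwarded-For appearing in the echoed request suggest a proxy in the path:

# A minimal sketch of the TRACE test described above: send a TRACE request
# and inspect the echoed request for headers added or rewritten by an
# intermediary proxy. The host is a hypothetical placeholder.
import http.client

HOST = "theSiteName.com"  # hypothetical target

conn = http.client.HTTPConnection(HOST, 80, timeout=10)
conn.request("TRACE", "/", headers={"X-Probe": "trace-test"})
resp = conn.getresponse()
body = resp.read().decode(errors="replace")

print("Status:", resp.status)
# If TRACE is allowed, the body echoes our request; markers such as Via or
# X-Forwarded-For indicate an intermediary modified the request in transit.
for marker in ("Via:", "X-Forwarded-For:", "X-Probe:"):
    print(marker, "present" if marker.lower() in body.lower() else "absent")
conn.close()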

Mapping the Application

To map the application's functionality, contents, and workflow, an attacker can use many methods and apply them through different tools.

Mapping functionalities and contents: 

Web application crawling: 

Using specialized software that automates the generation of HTTP requests, an attacker can capture the returned results and recursively extract the included links, forms, and even client-side scripts, with the goal of building a skeleton of the web site's functionality and contents. An example of a tool that helps spider a site is Burp Suite. The fully automated approach might not be the best way to get a good picture of the application's functionality and contents, because automated crawlers may be unable to capture links embedded in complicated JavaScript or in compiled client code such as Flash or Java applets.
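
A minimal crawler along these lines might look like the sketch below (assuming the requests and beautifulsoup4 packages and a hypothetical starting URL); as noted, it will miss links generated by complicated JavaScript or compiled client code:

# A minimal crawling sketch: fetch pages recursively, extract anchor links,
# and record the URLs discovered within the target site.
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup

START = "http://theSiteName.com/"   # hypothetical starting point
MAX_PAGES = 50

seen, queue = set(), [START]
while queue and len(seen) < MAX_PAGES:
    url = queue.pop()
    if url in seen:
        continue
    seen.add(url)
    try:
        page = requests.get(url, timeout=10)
    except requests.RequestException:
        continue
    for anchor in BeautifulSoup(page.text, "html.parser").find_all("a", href=True):
        link = urljoin(url, anchor["href"])
        # Stay inside the target site and skip already-visited pages.
        if urlparse(link).netloc == urlparse(START).netloc and link not in seen:
            queue.append(link)

print("\n".join(sorted(seen)))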

On the other hand, the multilevel input validation techniques used by modern applications prevent spidering tools from bypassing successive levels with randomly generated content. Another issue is related to the URL-based deduplication used by spidering tools, which tend to drop repeated successive URLs to avoid infinite loops, for example when a single URL such as http://myBank/manage.php is used for multiple actions; conversely, the spider can get locked onto the same URL when it uses a timestamp as a parameter.

User-guided spidering:

An alternative (or complement) to automated crawling is user-driven spidering, where the user manually explores the different application functionalities, including filling in form data.

In this type of spidering, the spidering software logs the user's input and the results returned by the explored application. The tool works as a proxy/spider that intercepts all requests and responses. With this approach, the user can guarantee that the session stays active and that all entered information satisfies the expected human-interaction rules.
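
One way to sketch such a logging proxy is a mitmproxy addon (run with mitmproxy -s log_spider.py); the log format and file name below are arbitrary choices, and the user simply browses the application through the proxy while everything is recorded:

# A minimal sketch of the proxy/spider logging idea as a mitmproxy addon.
from mitmproxy import http

LOGFILE = "user_guided_spider.log"  # arbitrary file name

class SpiderLogger:
    def response(self, flow: http.HTTPFlow) -> None:
        # Called once the server's response is available; log the method,
        # URL, submitted form data, and status code for later analysis.
        with open(LOGFILE, "a") as log:
            log.write(f"{flow.request.method} {flow.request.pretty_url} "
                      f"-> {flow.response.status_code}\n")
            if flow.request.urlencoded_form:
                log.write(f"    form: {dict(flow.request.urlencoded_form)}\n")

addons = [SpiderLogger()]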

Hidden content spidering: 

Accessing the mainstream content usually does not provide a quick, juicy bite of information; accessing archived content, backups, test files, source files, and comments yields a lot of information and perhaps some easy-to-exploit vulnerabilities. This type of content can be discovered by inference from published content or by using a brute-force approach that tests destinations based on a dictionary of common words, such as common folder and service names. An example would be:

If published content is found at an address like:
http://theSiteName.com/stable/en/about
it would be a good idea to test addresses such as:
http://theSiteName.com/archived/en/about
http://theSiteName.com/development/en/about
http://theSiteName.com/old/en/about 
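
A minimal brute-force sketch of this idea, with an illustrative wordlist and placeholder base URL, could look like:

# A minimal sketch of the path brute-force approach: try common folder
# names around a known path and flag anything that is not a 404.
import requests

BASE = "http://theSiteName.com"           # hypothetical target
PATH_SUFFIX = "/en/about"
WORDLIST = ["stable", "archived", "development", "old", "backup", "test"]

for word in WORDLIST:
    url = f"{BASE}/{word}{PATH_SUFFIX}"
    try:
        resp = requests.get(url, timeout=10, allow_redirects=False)
    except requests.RequestException:
        continue
    if resp.status_code != 404:
        print(f"{resp.status_code}  {url}")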

For example, adding robots.txt to your brute-force dictionary might let you retrieve this file if it exists, which provides a very good source of information: an attacker may be able to map special folders or files from the indexing rules set in that file. If the file contains a (Disallow: /something) rule, this strongly suggests that (something) contains sensitive content or refers to an administrative page that the administrator does not want indexed.
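
A quick sketch that retrieves robots.txt and lists its Disallow entries (the host name is a placeholder):

# Fetch robots.txt and print the Disallow rules, which often point at
# folders the site owner prefers to keep out of search engines.
import requests

resp = requests.get("http://theSiteName.com/robots.txt", timeout=10)
if resp.status_code == 200:
    for line in resp.text.splitlines():
        if line.strip().lower().startswith("disallow:"):
            print(line.strip())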
