Recommended reading:
| <html>
<head>
<!-- This stuff in the header has nothing to do with the level -->
<link rel="stylesheet" type="text/css" href="http://natas.labs.overthewire.org/css/level.css">
<link rel="stylesheet" href="http://natas.labs.overthewire.org/css/jquery-ui.css" />
<link rel="stylesheet" href="http://natas.labs.overthewire.org/css/wechall.css" />
<script src="http://natas.labs.overthewire.org/js/jquery-1.9.1.js"></script>
<script src="http://natas.labs.overthewire.org/js/jquery-ui.js"></script>
<script src=http://natas.labs.overthewire.org/js/wechall-data.js></script><script src="http://natas.labs.overthewire.org/js/wechall.js"></script>
<script>var wechallinfo = { "level": "natas3", "pass": "sJIJNW6ucpu6HPZ1ZAchaDtwd7oGrD14" };</script></head>
<body>
<h1>natas3</h1>
<div id="content">
There is nothing on this page
<!-- No more information leaks!! Not even Google will find it this time... -->
</div>
</body></html>
|
Nothing on this page directly points to what we're after; all we have to go on is that HTML comment in the source, hinting that not even Google will find it this time. What it alludes to is a file named
robots.txt, which web crawlers check to learn which paths they should skip. I'll leave it to you to find its location. Let's see its content:
| User-agent: *
Disallow: /s3cr3t/
|
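We can confirm how a well-behaved crawler would read these rules with Python's standard-library `urllib.robotparser`, feeding it the file's two lines directly (the content above, not a live fetch):

```python
from urllib.robotparser import RobotFileParser

# The exact rules served by the challenge's robots.txt.
rules = ["User-agent: *", "Disallow: /s3cr3t/"]

rp = RobotFileParser()
rp.parse(rules)

# Anything under /s3cr3t/ is off-limits to every crawler...
print(rp.can_fetch("*", "/s3cr3t/users.txt"))  # False
# ...while the rest of the site remains crawlable.
print(rp.can_fetch("*", "/index.html"))        # True
```

Of course, robots.txt is only a polite request to crawlers; nothing stops us from visiting the directory ourselves, which is exactly the point.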
Short and simple: it tells every crawler to stay out of the /s3cr3t/ directory. Taking advantage of directory listings as we did in the
previous challenge, we can browse to that directory and find yet another
users.txt, containing only the username and password for the next challenge:
User | natas4
Password | Z9tkRkWmpt9Qr7XrR5jWRkgOU901swEZ
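The whole retrieval can be sketched from the command line. Natas levels sit behind HTTP Basic auth, and the natas3 password appears in the page's own wechallinfo script block; the host name follows the standard Natas pattern, so treat the exact URL as an assumption:

```shell
# Sketch, assuming the usual natasN.natas.labs.overthewire.org host naming.
BASE="http://natas3.natas.labs.overthewire.org"
CREDS="natas3:sJIJNW6ucpu6HPZ1ZAchaDtwd7oGrD14"

curl -s -u "$CREDS" "$BASE/robots.txt"         # the hint
curl -s -u "$CREDS" "$BASE/s3cr3t/"            # directory listing
curl -s -u "$CREDS" "$BASE/s3cr3t/users.txt"   # next level's credentials
```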
We're starting to combine the skills acquired in previous challenges to reach our goal. This level added the
robots.txt file, which, together with viewing the page source and our knowledge of directory listings, gave up the much-desired password.
Never Settle,