Nowadays, violating the security of a system has become quite easy.
Today I was visited by a colleague of mine who works on the IT security staff. He asked me if I could take a look at a C program, and I obviously accepted; great was my astonishment when I realized that this “innocent code” was nothing less than stealth code allowing an attacker to enter a Linux system, take ownership of the root credentials and hide his actions. Still, there was something about it that did not add up: the code was extremely well commented, tidy, apparently written for didactic purposes only. “Who gave you this program?” I suddenly asked my friend. He blushed and admitted: “I isolated it on a misbehaving server, and now I understand why it misbehaved. Reading the command history I noticed that a user had logged in as root, downloaded (and compiled!) the malware and launched it: after that, he could log in in stealth mode and do whatever he wanted. Fortunately, he only tried a few commands.”
The intruder was just a script kiddie after all, and left clues of his actions everywhere; he even left his source code in his workspace. Still, the risk inherent in such situations is fairly high, and higher still when the server faces the Internet with all its default configuration options, disregarding the basics of IT security.
Among other things, the code contained a link to a peculiar IP address, a sort of “signature”; following it up, I could easily show that the link opened a true “wonder shop”, a real arsenal of utilities and documentation related to hacking and cracking.
Our great luck here is that most script kiddies are so overwhelmed by how easy it is to break into other people’s computers that they don’t even read the manuals, caveats or other documentation needed to do a clean job.
There is a custom of handing down the installation and usage instructions for those programs orally: this habit makes the security manager’s job much easier.
I was told a nice anecdote about this: we are in the CS lab of a big university, and a student (a freshman or a sophomore, perhaps) calls his tutor, complaining that “the C compiler does not work”. The tutor approaches the PC, glances at the code, and then sees something that catches his attention. The code that “did not compile” was meant to create a shell giving the user the ability to run any program on the server with maximum privileges. Obviously the student did not have the skills to operate such code, and the university had other security systems in place to inhibit such malicious behaviour. What I want to underline here is how easy and uncontrolled the path to private data sometimes is.
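To give an idea of what the student was fumbling with, here is a minimal sketch of the classic textbook trick the anecdote suggests (a hypothetical reconstruction on my part, not the actual code from that lab). By itself it does nothing special: it only becomes a privileged shell if an administrator carelessly leaves the compiled binary owned by root with the setuid bit set, which is exactly the kind of misconfiguration a continuous audit should catch.

    /* Hypothetical reconstruction of the classic setuid-root shell trick.
       Harmless on its own: setuid(0) succeeds only if the binary is already
       running as root or was installed setuid-root by an administrator. */
    #include <unistd.h>

    int main(void) {
        if (setuid(0) != 0)             /* try to become root */
            return 1;                   /* refused: no special privileges */
        execl("/bin/sh", "sh", (char *)NULL);   /* replace ourselves with a shell */
        return 1;                       /* reached only if execl failed */
    }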
Security does not mean complacency about the inviolability of our data and the inaccessibility of our machines. Security means knowledge of the risks, and continuous auditing.
It is useless to implement a data backup routine if no one checks it regularly (the risk here is losing older backups by overwriting them with data corrupted after an attack), but complacency about the state of a system whose security has been tested once is just as useless, because new intrusion techniques exploiting software bugs and weaknesses are developed continuously.
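To make “checking it regularly” concrete, here is a minimal sketch of a backup-verification routine: it re-verifies each backup file against the checksum recorded when the backup was made, so corrupted or tampered data is noticed before it overwrites older copies. The manifest path, its format, and the reliance on an external sha256sum binary are all assumptions chosen for illustration, not a prescribed tool.

    /* Minimal backup-verification sketch (hypothetical manifest path/format).
       The manifest is assumed to contain lines of the form: "<sha256>  <path>". */
    #include <stdio.h>
    #include <string.h>

    #define MANIFEST "/backup/manifest.sha256"   /* hypothetical location */

    int main(void) {
        FILE *m = fopen(MANIFEST, "r");
        if (!m) { perror(MANIFEST); return 1; }

        char expected[65], path[1024];
        int failures = 0;

        /* For each file listed in the manifest, recompute its checksum
           and compare it with the value recorded at backup time. */
        while (fscanf(m, "%64s %1023s", expected, path) == 2) {
            char cmd[1100], actual[65] = "";
            snprintf(cmd, sizeof cmd, "sha256sum '%s'", path);
            FILE *p = popen(cmd, "r");
            if (!p || fscanf(p, "%64s", actual) != 1 || strcmp(expected, actual) != 0) {
                fprintf(stderr, "MISMATCH or unreadable: %s\n", path);
                failures++;
            }
            if (p) pclose(p);
        }
        fclose(m);

        printf("%d backup file(s) with problems\n", failures);
        return failures ? 2 : 0;
    }

Run from cron, a routine like this costs nothing; what matters is that someone actually reads its output.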
Post scriptum
Bruce Schneier used to repeat that there are only two classes of encryption systems: those that let you hide your data from your five-year-old sister, and those that protect your privacy against a government. The latest news from Wikileaks and the NSA makes clear which class is most used today.
Sure enough, encryption is not everything: we also have digital signature systems, electronic certificates, procedures and processes tightly connected to corporate security, technological infrastructures, abstraction layers… and post-its.
In such cases, the level of security equals that of the weakest link: it is not at all rare to meet users who stick clearly visible yellow post-its next to their PCs to “remember” the passwords for the successive layers of security, or managers who forget to monitor the corporate telephone lines through which any would-be hacker with a portable PC and a modem could penetrate the corporate intranet.
It is well known that most violations of sensitive data are due to corporate staff; yet, far from justifying a lowering of the defenses, such behaviour should lead to rethinking not just the technical infrastructure, but also the control procedures. And here we come up against bureaucracy again: the procedures for evaluating data-management risks within a corporation are gathered in the ISO 27000 family of standards; to obtain ISO 27000 certification, the corporation must face a long series of structural, environmental, regulatory, behavioural, juridical and process changes, and agree to periodic and spot checks on how the standards are applied. This means a huge expense, to be considered a long-term investment, because the procedures go as far as specifying which forms must be used to hand over the security policies should the security manager leave.
Even if economically and logistically heavy, such an approach should be viewed positively, in anticipation of better enterprise security; but then… what happens when an ISO 9000-certified corporation fails a spot check? Easy: it is rebuked by the controlling authority and forced to realign within the next three months, or it loses the certification. A well-thought-out process in the case of ISO 9000, but a disaster in the case of ISO 27000: a bad mark on a spot check would simply announce that “corporation XYZ is not aligned with the security policies: please feel free to attack it in any way you know”. The alternative is to hide the failed check while waiting for security to be restored. Here is the confirmation that these standards carry an exquisitely bureaucratic meaning, and offer nothing from the standpoint of corporate security once they are violated.