Saturday, January 23, 2010

Tech business lesson for the New York Times

A few days back, the New York Times published a rather alarming (or alarmist) piece prompted by the Chinese hacking of Google. (If you have not heard of that, crawl out from under your rock and read any newspaper.) The piece is filled with approximations that are probably not very informative for the public at large.

The crown jewels of Google, Cisco Systems or any other technology company are the millions of lines of programming instructions, known as source code, that make its products run. If hackers could steal those key instructions and copy them, they could easily dull the company’s competitive edge in the marketplace.

Is source code important? Yes! Would having Google's source code make my day? Yes! Would it allow me to beat them at their own game? No! The truth is that Google and most other tech firms are not just coasting on having the best source code around. Google has data centers around the world to host their data and provide us with services. They have earned the trust of advertisers by promising them a fair deal despite a rather non-transparent (unless you are very good at math) pricing mechanism. They have earned the trust of millions of people who hand over their emails, voice mails, medical records, trips and so on. Would I trust some shady Chinese hackers, possibly backed by their totalitarian government, with my private data? No! Would you? I hope not. Furthermore, what happens when Google rolls out the next Google Maps, or the next Gmail? Because they can keep on doing that. They have some of the brightest minds in the industry and a corporate culture that fosters that kind of innovation. Those things do not come with the source code.

And Google is far from an exception. Microsoft Windows is probably far from being the best operating system in the world. Yet it has a market share that dwarfs every other OS on the market. Even free ones! How? Well, the answer is complicated, involving marketing, deals with manufacturers, network externalities and more, but one thing is certain: it's not the source code!

More insidiously, if attackers were able to make subtle, undetected changes to that code, they could essentially give themselves secret access to everything the company and its customers did with the software.
The fear of someone building such a back door, known as a Trojan horse, and using it to conduct continual spying is why companies and security experts were so alarmed by Google’s disclosure last week that hackers based in China had stolen some of its intellectual property and had conducted similar assaults on more than two dozen other companies.


Alright, this is properly scary. If the Chinese government is snooping on my email, they probably won't get anything useful, but I would rather they didn't anyhow. Is it likely? Not really. If you have ever worked on a big software project, you are probably familiar with version control systems. A version control system keeps track of every change made to the code and lets you roll back to a previous version if a change broke something. (A common occurrence: programming is generally one step forward, two steps backward.) What that means is that if Chinese hackers introduced a back door into Gmail:

  1. It's probably confined to an alpha or beta version
  2. There is a suspicious-looking record somewhere that must scream out to a developer: "Why the heck did you change the code to include a backdoor?"
 It's possible such a thing would be missed, but it's not very likely. I imagine all users whose computers were affected by the break-in were instructed to exercise extreme vigilance and report anything suspicious. And auditing the change history is exactly what version control tools are built for, as the sketch below illustrates.
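
A minimal sketch of that kind of audit, assuming a project tracked with git (the repository path and the sensitive directory are hypothetical): it lists every commit that ever touched the authentication code, so a reviewer can ask of each one, who made this change and why?

    import subprocess

    # Hypothetical locations; adjust to the project being audited.
    REPO = "/path/to/project"
    SENSITIVE_PATH = "src/auth/"

    # Ask git for every commit that touched the sensitive directory.
    log = subprocess.run(
        ["git", "log", "--oneline", "--", SENSITIVE_PATH],
        cwd=REPO, capture_output=True, text=True, check=True,
    )

    # Each line is one commit; any change nobody remembers making
    # deserves a very close look.
    for line in log.stdout.splitlines():
        print(line)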

Computer users around the globe have Adobe’s Acrobat or Reader software sitting on their machines to create or read documents, and Adobe’s Flash technology is widely used to present multimedia content on the Web and mobile phones.
“Acrobat is installed on about 95 percent of the machines in the world, and there have been a lot of vulnerabilities found in Flash,” said Jeff Moss, a security expert who sits on the Homeland Security Advisory Council. “If you can find a vulnerability in one of these products, you’re golden.”


 Again, properly scary. If they somehow turned Adobe Acrobat or the Flash plugin into a Trojan horse (a program used to access your computer without your consent), there would be something to be scared of. However, Adobe is not like Google. For the most part, they ship software, not cloud services. So unless the attackers managed to make their changes, compile the source code (source code must be translated into machine code before it is anything other than a text file, and that process can be very time-consuming) and push the result through the automated update system, it won't matter. And I'm sure Adobe is being quite careful to make sure that nobody has tampered with the updates they will push out in the near future.
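
One standard defense on the receiving end, sketched below under my own assumptions (the file name and the published digest are hypothetical, and real update systems use full cryptographic signatures rather than a bare hash): before installing a downloaded update, check that it matches the value the vendor published over a trusted channel.

    import hashlib

    # Hypothetical values: the digest a vendor would publish over a
    # trusted channel, and the downloaded update package.
    EXPECTED_SHA256 = "replace-with-published-digest"
    UPDATE_FILE = "update_package.bin"

    def sha256_of(path):
        """Hash the file in chunks so large updates don't fill memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    if sha256_of(UPDATE_FILE) != EXPECTED_SHA256:
        raise SystemExit("Update does not match the published digest; refusing to install.")
    print("Digest matches; safe to install.")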

Given the complexity of today’s software programs, which are typically written by teams of hundreds or thousands of engineers, it is virtually impossible to be perfectly confident in the security of any program, and tampering could very well go undetected.
Companies are understandably reluctant to discuss their security failures. But one notable episode shows just how damaging the secret tampering with source code can be.


 Here, the authors of the article run counter to more than a century of security research, ever since Auguste Kerckhoffs made the argument against security through obscurity (the practice of hiding how your security system works to keep others from discovering its vulnerabilities). If there is a flaw in your system, eventually somebody will find it. That is just a fact. Now you have two choices. You can open your system to legitimate security researchers who will help you fix the vulnerabilities, or you can hide it so that only those interested in malicious activities will break it open. The article advocates the latter. In reality, security research is largely hit and miss. It's not about a single brilliant individual finding the "skeleton key to the internet." It's about creativity, inventiveness and originality. It's about thinking outside the box. And guess what? The security team that designed a system has a really hard time thinking outside the box they themselves created. More people knowing how your system works is better.
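
Kerckhoffs's principle fits in a few lines of code. A minimal sketch, assuming the third-party Python cryptography package: the Fernet scheme it implements is completely public, documented down to the last byte, yet the ciphertext is worthless without the key. The secrecy lives in the key, not in hiding the algorithm.

    from cryptography.fernet import Fernet

    # Fernet's design (AES plus an HMAC) is public; only the key is secret.
    key = Fernet.generate_key()
    token = Fernet(key).encrypt(b"my private email")

    # Anyone holding the key can decrypt; knowing the algorithm alone
    # gets an attacker nowhere.
    print(Fernet(key).decrypt(token))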

 The second claim the article makes here is that companies do not want to discuss their security failures. That is true. But there is nothing "understandable" about it. Who do you trust: the guy who tells you when he has made a mistake, or the guy who swears everything is fine while he's driving you off a cliff? Being open about security issues is what makes customers trust your products. They know there are no known security issues because they trust that you would have told them otherwise.

Alan Paller, director of research at the SANS Institute, a security education organization, said American technology companies had gotten better about protecting their most prized intellectual property by creating more complex systems for viewing and changing source code. Such systems can keep a detailed account of what tweaks have been made to a software product.

 Now, I don't know for sure what he's talking about here, but it sounds a lot like the version control systems I mentioned above. Those are not a couple of years old, they have been around for decades, and security is not their primary purpose. They are collaboration tools. They allow multiple people to work on the same project and prevent a single mistake from forcing everyone to restart from scratch.
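
That "detailed account" cuts both ways: once a bad change is found, the same history lets you undo it cleanly. Another minimal sketch, again assuming git, with a hypothetical repository path and commit hash:

    import subprocess

    REPO = "/path/to/project"   # hypothetical repository
    BAD_COMMIT = "a1b2c3d"      # hypothetical commit flagged during review

    # Create a new commit that undoes the bad one, keeping the full
    # history intact so the incident itself stays on the record.
    subprocess.run(
        ["git", "revert", "--no-edit", BAD_COMMIT],
        cwd=REPO, check=True,
    )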

The New York Times is a great paper, but they really need to get better at covering technology.
