

Sunday, March 3, 2013

Redirection and decryption of mobile traffic: Is your browser a MitM?


Takeaway: By design, certain mobile web browsers send HTTPS-encrypted traffic to their home servers first. Michael Kassner finds out why, and what it means to each of us.
If you think HTTPS traffic from your mobile web browser travels unaltered, and safely encrypted all the way to the remote web server you requested information from, don’t be so sure. Opera Mini developers were asked:
Is there any end-to-end security between my handset and - for example - paypal.com or my bank?
The answer:
Opera Mini uses a transcoder server to translate HTML/CSS/JavaScript into a more compact format. It will also shrink any images to fit the screen of your handset. This translation step makes Opera Mini fast, small, and also very cheap to use. To be able to do this translation, the Opera Mini server needs to have access to the unencrypted version of the webpage. Therefore no end-to-end encryption between the client and the remote web server is possible.
To rule out any doubt:
If you need full end-to-end encryption, you should use a full web browser such as Opera Mobile.
Just to be clear: "end-to-end encryption," in this case, means HTTPS (encrypted) traffic travels all the way to the remote web server, a bank for example, without being decrypted along the way.
I don’t use any of Opera’s web browsers. I’ll be honest, even if I did use Opera, I would not have known about the redirection. I only started checking what mobile web browsers were doing after a colleague informed me the tech press crucified Nokia for doing something similar.

How it started

The upheaval about mobile web browsers started when Gaurang Pandya, Infrastructure Security Architect at Unisys Global Services India, determined HTTP web-browser requests on his Nokia phone were unexpectedly redirected to Nokia servers. Gaurang explains on his personal blog site:
It has been noticed that internet browsing traffic, instead of directly hitting requested server, is being redirected to proxy servers. They get redirected to Nokia/Ovi proxy servers if Nokia browser is used and to Opera proxy servers if Opera Mini browser is used.
Then Gaurang tried to sidestep the redirection:
I could not see any straightforward way to bypass this proxy setting and let my internet traffic pass through normally. This behavior is noticed regardless of whether the browsing is done through 3G or Wi-Fi network connections.
Gaurang wasn’t done; he decided to see if the same applied to HTTPS web-browser requests. He found his answer, posting his findings in this blog post:
[I]t is evident that Nokia is performing a Man In The Middle Attack for sensitive HTTPS traffic originated from their phone, and hence they do have access to clear text information which could include user credentials to various sites such as social networking, banking, credit card information, or anything that is sensitive in nature.
Needless to say, Gaurang’s comments garnered a great deal of attention. The blog post received 10,000 views in the first 24 hours, and currently has 20 pages of comments debating if redirecting traffic could “officially” be called a Man in the Middle attack or not. I’ll get to that later. Right now, I’d like to focus on the comment by Mark Durant, Nokia Communications:
We take the privacy and security of our consumers and their data very seriously. The compression that occurs within the Nokia Xpress Browser means that users can get faster web browsing and more value out of their data plans. Importantly, the proxy servers do not store the content of web pages visited by our users or any information they enter into them. When temporary decryption of HTTPS connections is required on our proxy servers, to transform and deliver users’ content, it is done in a secure manner.
This confirmation by Nokia virtually silenced those disagreeing with Gaurang’s results.

Why do it?

Why go through all this? The developers had to know there would be pushback from people concerned about privacy. As alluded to in the quote above, it's all about reorganizing the web page for speed and viewing on a mobile device. The question then becomes: what does a Man in the Middle attack, proxy redirection, or whatever you want to call it have to do with improving the mobile web browsing experience?
Mobile web browsers are not as powerful as those installed on computers. To compensate, Nokia and Opera shift most of the rendering work from the mobile browser to their own servers, which optimize the web page code and send the results back to the mobile browser for viewing.
The problem arises when the traffic from the mobile web browser is encrypted (HTTPS): the Nokia or Opera servers cannot manipulate a response they cannot read. So Nokia and Opera altered their mobile browsers to set up the encrypted link to their own servers instead. That is, the HTTPS traffic we see is HTTPS traffic Nokia and Opera can decrypt, because they hold the encryption keys.
It might help to look at one of Gaurang’s tests. He was watching what happened when his Nokia mobile web browser sent out a website request for Google.com.
Here are the steps:
  • Mobile web browser attempts to connect to https://www.google.com.
  • Connection is redirected to https://cloud13.browser.ovi.com (a Nokia server).
  • The mobile web browser receives a valid HTTPS certificate for cloud13.browser.ovi.com, not Google.com.
  • The server behind cloud13.browser.ovi.com makes a connection to https://www.google.com, acting as the mobile web browser by proxy.
  • Nokia’s server relays requests and replies between the mobile web browser and Google.com.
One way to look at it: there are two distinct encryption processes taking place, one between the mobile web browser and the Nokia server, and one between the Nokia server and the destination. The issue then becomes whether we are comfortable with Nokia, Opera, or whichever mobile web browser developer intercedes having access to what we consider sensitive information; otherwise, why would we be using HTTPS encryption?
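If you want to see this for yourself, inspecting the certificate your browser actually receives is the simplest test. Below is a minimal sketch in Python (standard library only) of the kind of check Gaurang performed; run it from the network vantage point you care about, and note that the host name here is just an example.

```python
# Connect to a site over TLS and report whose certificate comes back.
# Behind an intercepting proxy, the subject/subjectAltName would name the
# proxy (e.g., cloud13.browser.ovi.com) rather than the site you requested.
import socket
import ssl

def peer_certificate(host: str, port: int = 443) -> dict:
    """Return subject and subjectAltName of the certificate presented for host."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            return {"subject": cert.get("subject"),
                    "subjectAltName": cert.get("subjectAltName")}

if __name__ == "__main__":
    info = peer_certificate("www.google.com")
    print(info["subjectAltName"])  # expect *.google.com entries, not a proxy name
```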

Workarounds

I’ve been trying to determine which mobile web browsers use this approach, but wading through privacy policies and contacting the developers is slow going. For now, the best approach may be to treat as suspect any mobile web browser that displays HTTP, and particularly HTTPS, web pages differently than a computer does.
I already use a proxy service with my computer and mobile devices, so I believe I have unknowingly been avoiding this issue. This may be an alternative solution for those who are concerned about sensitive information being handled by yet another organization.

Final thoughts

I should have been aware of HTTPS traffic redirection. I’ve written two articles, “Ashkan Soltani introduces MobileScope, an innovative approach to online privacy” and “Find out which mobile apps are stealing your identity,” where both featured applications employed redirection techniques.
I also wanted to mention Gaurang has an update on his blog, stating Nokia still uses HTTPS proxy redirection, but no longer employs MitM technology, a good sign they listened. I was asked why Nokia was getting so beat up about using redirection, and not Opera. I would have to say it is because Opera was up front about it, and Nokia was not.
I’d like to end with a quote from Bruce Schneier, well-known security expert:
This is an area where security concerns are butting up against other issues. Nokia’s answer, which is basically ‘trust us, we’re not looking at your data,’ is going to increasingly be the norm.
I want to thank Gaurang for his research, and allowing me to use quotes and slides from his blog site.

Wednesday, February 27, 2013

Insider threats: Implementing the right controls


Takeaway: This article describes the signs that an employee might become an insider threat and recommends the various controls and monitoring that can be implemented to mitigate such threats.
In Part 1 of this two-part series, I explored the three primary types of insider threats: theft of intellectual property by its creators, fraud by non-management personnel in critical need of cash, and damage to information resources by IT administrators. In Part 2, we examine what to look for in employee behavior as signals that something bad has or will happen. We also look at timing and controls for mitigating insider risk.

The signs

Most employees give off unintentional signals when they’re under significant pressure or when they perceive management is abusing them. Figure A is a list of possible signs that an employee is about to go rogue. In short, any significant change in behavior can be a sign that an employee’s loyalty is waning, including (from "Prevent your employees from going rogue"):
  • Appearing intoxicated at work
  • Sleeping at the desk
  • Unexplained, repeated absences on Monday or Friday
  • Pattern of disregard for rules
  • Drug abuse
  • Attempts to enlist others in questionable activities
  • Pattern of lying and deception of peers or managers
  • Talk of or attempt to harm oneself
  • Writing bad checks
  • Failure to make child support payments
  • Attempts to circumvent security controls
  • Long-term anger or bitterness about being passed over for promotion
  • Frustration with management for not listening to what the employee considers grave concerns about security or business processes
Figure A: from "Prevent your employees from going rogue"
Employees often behave themselves in front of their managers. Consequently, a problem employee’s peers are the best monitoring tool an organization has. Train all employees to watch for signs of discontent. Providing a means of anonymously reporting peers to management is often the best approach to dealing with concerns many employees have of “not getting involved” or being labeled a tattletale.

Designing the right controls

As with any threat, the controls framework must consist of administrative, physical, and technical components.  The overall control design should enforce separation of duties, least privilege, and need-to-know.  A miss in any of these areas weakens your ability to deal with inevitable insider threats.

Administrative controls

Policies form the foundation. Clear statements of management intent serve two purposes. First, they make it clear to all employees what is and is not acceptable behavior and the consequences of behaving in unacceptable ways. Second, when supported by well-documented standards, guidelines, and procedures, they provide all employees with the capability to identify anomalous behavior in their peers, subordinates, and supervisors. Policies define acceptable behavior and enable every employee to detect rogue behavior.
The two objectives of policies described above are achieved only if all employees are aware of management’s expectations and how they affect each employee’s day-to-day work environment. Security training and continuous awareness activities fill this need.

Physical controls

Physical controls serve to deter, delay, detect, and respond to unauthorized personnel. Further, they control who can access physical resources (e.g., servers, routers, and switches) and when. The use of electronic physical controls adds logging and near-real-time oversight to physical access.
In many organizations, physical security is managed outside the security team. This does not mean, however, that security managers should simply ignore it. Any physical access to information resources circumvents most, if not all, technical controls. Understanding how to conduct a physical security gap analysis is the first step in engaging in the physical controls discussion.

Technical controls

Technical controls include identity management, authentication, authorization, and accountability. These control categories work together to reach the following access control objectives:
  • Identity management ensures each person and computer is assigned a meaningful set of attributes for use in the authentication and authorization steps. The identity provides a subject (an entity attempting to access a resource) with a manageable, trackable presence across an enterprise.
  • Authentication is the process of making an entity prove it is who or what it claims to be. Common controls include passwords, biometrics, and smartcards.
  • Authorization is the process of using the subject’s attributes to determine what it can access (need-to-know), what it can do with what it accesses (least privilege), and when access is allowed. In addition, authorization enforces both static and dynamic separation of duties. Separation of duties prevents any single subject from performing all tasks associated with a business process.
  • Accountability includes auditing, monitoring, and ensuring security teams understand what subject accessed a critical resource, when the resource was accessed, and what was done. In addition to monitoring authorized access, security teams should receive alerts when the number of unauthorized access attempts exceeds a predefined threshold (see the sketch after this list).
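To make the alert-threshold idea concrete, here is a minimal sketch in Python. The log format and threshold are assumptions for illustration, not a real SIEM API.

```python
# Count denied access attempts per subject and flag those over a threshold.
from collections import Counter

FAILED_ACCESS_THRESHOLD = 5  # assumption: tune to your environment

def subjects_to_alert(events, threshold=FAILED_ACCESS_THRESHOLD):
    """events: iterable of (subject, resource, outcome) tuples,
    where outcome is 'allowed' or 'denied'."""
    denied = Counter(subject for subject, _resource, outcome in events
                     if outcome == "denied")
    return {subject: n for subject, n in denied.items() if n >= threshold}

events = [("clerk1", "hr_db", "denied"),
          ("admin2", "finance_share", "allowed")]
events += [("contractor7", "source_repo", "denied")] * 6
print(subjects_to_alert(events))  # {'contractor7': 6}
```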
Separation of duties and least privilege are two primary constraints limiting what an insider can achieve. For example, an organization in which separation of duties and least privilege are enforced makes it difficult for a payroll clerk to commit fraud. The clerk wouldn’t be able to modify employee records AND enter time worked information AND approve payroll AND print checks/perform electronic transfers AND pick up or distribute payments. To execute all of these tasks would require collusion: enlisting others in the theft.
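Here is a sketch of that payroll example as a static separation-of-duties check; the role and task names are illustrative assumptions, not a real access-control system.

```python
# No single role may hold every permission in the payroll process.
PAYROLL_TASKS = {"modify_records", "enter_time", "approve_payroll",
                 "print_checks", "distribute_payments"}

ROLE_PERMISSIONS = {
    "payroll_clerk":   {"enter_time"},
    "hr_specialist":   {"modify_records"},
    "payroll_manager": {"approve_payroll"},
    "treasury":        {"print_checks", "distribute_payments"},
}

def can_complete_alone(role: str) -> bool:
    """True only if one role can perform every task in the process,
    which a sound separation-of-duties design makes impossible."""
    return PAYROLL_TASKS <= ROLE_PERMISSIONS.get(role, set())

# Verify no role can run the whole process single-handedly.
assert not any(can_complete_alone(role) for role in ROLE_PERMISSIONS)
```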
Another example of separation of duties is preventing developers from placing new or modified applications into production. All code changes should be governed by a strict, closely managed, and distributed change management system. This helps prevent a developer or administrator from placing damaging code into production systems.
When assessing least privilege, consider whether the organization should allow copying of information to mobile storage devices (e.g., thumb drives, laptops, smartphones, etc.). Is it really necessary for everyone to remove information from within your organization’s trust boundary? Similarly, what is the risk associated with allowing employees to access personal email accounts and file transfer services (e.g., Transferbigfiles.com) while at the office? As with most control decisions, it depends on the business need and the risk involved.

Monitoring and filtering

When attempting to detect internal threat actions, start with a good security information and event management (SIEM) system. The SIEM solution looks for anomalous behavior based on activity across one or more devices. It supports prevention and response controls and processes. Finally, be sure to enable logging for access to your valuable files, financial systems, and other critical systems.
Filtering solutions support monitoring in two ways. First, all data transfers are checked for sensitive information. With some systems, application of business policies prevents or restricts certain types of transfers. Filtering is also a great method of tracking what goes out via email. In any case, alerting is key when a questionable transfer occurs, such as a large file transfer at an odd time or between questionable locations.
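As a concrete illustration, here is a minimal sketch of those alerting rules in Python; the size threshold, business hours, and domain list are assumptions to be tuned locally, not values from the article.

```python
# Flag transfers that are large, happen at odd hours, or go to watched domains.
from dataclasses import dataclass
from datetime import datetime

LARGE_TRANSFER_BYTES = 500 * 1024 * 1024    # assumed threshold: 500 MB
BUSINESS_HOURS = range(7, 19)               # assumed 07:00-18:59 local time
WATCHED_DOMAINS = {"transferbigfiles.com"}  # example service from the text

@dataclass
class Transfer:
    user: str
    dest_domain: str
    size_bytes: int
    when: datetime

def alert_reasons(t: Transfer) -> list:
    reasons = []
    if t.size_bytes >= LARGE_TRANSFER_BYTES:
        reasons.append("large transfer")
    if t.when.hour not in BUSINESS_HOURS:
        reasons.append("odd hour")
    if t.dest_domain in WATCHED_DOMAINS:
        reasons.append("questionable destination")
    return reasons

t = Transfer("jdoe", "transferbigfiles.com",
             700 * 1024 * 1024, datetime(2013, 2, 27, 2, 30))
print(alert_reasons(t))  # ['large transfer', 'odd hour', 'questionable destination']
```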
NetFlow analysis supports filtering and logging solutions by identifying unusual activity across network segments and between systems. Often, it ships with the SIEM solution, so an organization doesn’t have to purchase an additional product. Once tuned to accept normal traffic patterns, it is a valuable tool for identifying anomalous data transfers.
Second, we can simply deny employees access to Internet locations used for extracting stolen data. Products like Websense or OpenDNS allow organizations to control access to external email and data transfer/storage sites. Blocking access is critical if no filtering solution exists. It is also critical during an employee’s transition.

Timing

According to the CERT Insider Threat Center, most thefts of intellectual property occur during the month before and the month after an employee leaves the company. This timeline also applies to IT insiders placing time bombs, back doors, etc., into production systems. Regardless of whether or not anyone reports one or more of the behaviors listed earlier, it is simply good security to check the past behavior of an employee once he or she gives notice.
Behavior checking should include accounts created, files accessed, data transfers completed, and any other activity relevant to moving data out of your network. Checking for unusual or seldom used administrator accounts is important.  However, organizations shouldn’t wait until someone gives notice before they audit privileged accounts. This should be part of normal auditing processes.
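A departure-window audit like the one described above can be scripted against almost any log store. The sketch below assumes a hypothetical list of (user, action, timestamp) records; it illustrates the one-month-before-and-after window noted by CERT, not a product feature.

```python
# Pull a departing employee's relevant events from the month surrounding notice.
from datetime import datetime, timedelta

AUDITED_ACTIONS = {"account_created", "file_accessed", "data_transfer"}

def departure_window_events(log, user, notice_date, days=30):
    start = notice_date - timedelta(days=days)
    end = notice_date + timedelta(days=days)
    return [(u, action, ts) for (u, action, ts) in log
            if u == user and action in AUDITED_ACTIONS and start <= ts <= end]

log = [("asmith", "account_created", datetime(2013, 2, 1)),
       ("asmith", "data_transfer",  datetime(2013, 2, 20)),
       ("bjones", "file_accessed",  datetime(2013, 2, 21))]
print(departure_window_events(log, "asmith", datetime(2013, 2, 15)))
```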
Finally, fraud usually takes place over long periods having nothing to do with when an employee leaves. In fact, leaving employment denies an insider access to the collusion-based network necessary to continue the flow of ill-gotten gains. Auditing and employee education are the best monitoring tools available for fraudulent behavior in process.

The final word

Trusted employees can go rogue for a number of reasons, some of which have nothing to do with how they’re treated at the office. While the reasons might vary, the insider-driven financial damage suffered by businesses each year demonstrates the need for closer monitoring of all key employees. I am not implying all employees are dishonest. However, the time will come when someone you trust crosses the line.
Detecting those that plan to do harm is often very difficult unless employee awareness, monitoring, alerting, and response are in place. Further, consider detailed analysis of a departing employee’s system and network behavior in accordance with clearly documented and distributed policies.

Wednesday, February 20, 2013

Manage insider threats: Knowing where the risks are

Takeaway: This article details the insider threats that an organization should be prepared to defend against.
Too often, we view insider risk as a homogeneous threat landscape: employees with access do bad things, and there is business impact.

While this description is somewhat accurate, it doesn’t provide enough information with which to manage risk. What we need is a deeper look at what types of threats exist, the business roles involved, and the signs that typically exist when an employee, vendor, etc. is not complying with policy, law, or ethics. Armed with this information, organizations can implement administrative, technical, and physical controls to mitigate insider risk.
In this opening article, we look at the three categories of insider threats as defined in The CERT Guide to Insider Threats: How to Prevent, Detect, and Respond to Information Technology Crimes (Cappelli, Moore, & Trzeciak) and at The CERT Insider Threat Center. In Part 2, we will discuss recommended methods for detecting, containing, and responding to insider threats planned, in progress, or completed.

Insider threat defined

Defining insider threats requires an understanding of who and what are involved. The three primary categories of associated attacks are theft of intellectual property, fraud, and damage to information resources. In each category, CERT research tells us that a specific business role is usually responsible. See Table A.

Table A

Threat category | Business role usually responsible
Theft of intellectual property | The IP's creators (engineers, scientists, programmers)
Fraud | Non-management personnel in critical need of cash
Damage to information resources | IT administrators

Intellectual property theft

Intellectual property (IP) is any "creation of the mind" created or owned by an organization. For our purposes, examples include:
  • Engineering designs/drawings
  • Software created in-house
  • Trade secrets
In many situations, the creators of IP (engineers, software developers, etc.) believe they have ownership rights. In others, financial gain or professional advancement is the driver for theft. The tipping point from good to rogue employee usually happens when creators don’t receive recognition for their work or when they don’t perceive themselves as adequately compensated and appreciated. CERT lists several objectives for IP theft, including
  • Starting a new business
  • Providing a competitive advantage to a new employer
  • Providing it to a foreign country (especially a country with which an employee has cultural, political, or ethnic ties)
Because people allowed access to IP are most likely to steal IP, detection can be difficult. However, close attention to common IP removal paths is the first step in mitigating risk from IP loss, including
  • Company email
  • Remote network access
  • Storage on laptops and other mobile storage devices
  • File transfer services (e.g., FTP or SFTP)

Fraud

Fraud is theft of financial assets. Employee fraud is much more common than most organizations believe. In an article at CFOOnline.com, Tracy L. Coenen writes, “Experts estimate that on average it costs companies 3% to 5% of revenue each year.” For example, a payroll clerk creating a false employee, paying that employee, and then collecting and cashing the check commits fraud. Other types of fraud include misuse of expense accounts or payment to vendors when they provide no services or products. People deep in debt with no hope of digging themselves out tend to top the list of insider threats in this category.
Fraud occurs when three conditions are met, as shown in Figure A. Pressure is usually a seemingly overwhelming financial need. Opportunities consist of vulnerabilities in an organization’s processes, security, etc. that allow a pressured employee to steal with little chance of detection. Rationalization occurs when an employee convinces himself that his need is greater than ethical or moral concerns. An employee might also rationalize theft based on how she perceives management mistreatment or ingratitude for the business value she’s provided. Removing one side of the triangle eliminates or significantly reduces risk from fraud.

Figure A

Figure A: The fraud triangle, developed by Donald Cressey
Fraud occurs across many channels, and involvement might extend beyond employees to external criminal individuals or organizations. Again, employees resorting to fraud usually seek financial gain. Methods include
  • Selling stolen information
  • Modifying information to realize financial gains for self or others
  • Receiving payment for adding, modifying, or deleting information
Most employees committing fraud avoid complex technological pathways. For example, the last two examples above simply require alteration of a database without removal of data. When data is removed, it is often downloaded to a home computer, copied to mobile storage, faxed, or emailed.

Damage to information resources

Damage to information resources is usually an attempt to break one or more business processes, thereby resulting in significant harm to the business. In most cases, only someone with administrator access can successfully achieve these goals. For example, a programmer might plant a logic bomb that destroys a database, irreparably damages server software, or causes an application to perform in unexpected ways. In addition to logic bombs, reconfiguration of network devices in ways that cause significant loss of productivity is a surreptitious malicious act often difficult to remediate.
Administrators don’t always want to make themselves known with a large, visible event. Rather, creation of additional administrator accounts often provides an attacker with long-term access for small but costly hits against a current or former employer. Organizations without proper log management would have a very difficult time assigning responsibility when the rogue account is eventually identified.

Collusion

Employees don’t always have access to everything needed for theft or system damage. Many organizations raise barriers with separation of duties enforced with role-based access control. Enterprising insider threats circumvent these controls using collusion.

What is collusion?

Peter Vajda writes, “Collusion takes hold when two (or more) individuals co-opt their values and ethics to support their own - and others’ - mis-deeds.” The key word is support. While an engineer, for example, might have full access to all relevant components of the IP he or she intends to steal, a payroll or accounts payable clerk might not. Consequently, the person planning the theft might recruit key employees with access to information or processes otherwise unavailable.
It is usually the most trusted employees who commit these crimes. Collusion increases the risk for the perpetrators, but it also decreases the opportunities to detect theft. Bypassing separation of duties via collusion circumvents a key control. According to CERT research, it isn’t uncommon for multiple individuals (including outsiders) to participate in long-term fraud.

Probability of collusion

Managers like to believe their employees will behave with integrity, but collusion is a common cause of insider risk. According to a Fraud Matters Newsletter article posted at the EFP Rotenberg website, "Collusion accounts for as much as 40 percent of fraud, with median loss of approximately $485,000, nearly five times that of crimes perpetrated by an individual alone." The amount of loss from fraud associated with collusion significantly elevates the associated risk to levels needing close attention by security teams and management.

The last word

Insider threats can potentially cost organizations a great deal each year through loss of IP, fraud, and damage to information resources. Each threat category largely involves a specific role or set of business roles and different attack vectors. In Part 2, we will explore recognizing problem employees and implementation of controls to mitigate insider opportunities.

Saturday, July 7, 2012

Microsoft security competition: A model for the future?

Takeaway: Patrick Lambert shares his perspective on Microsoft’s BlueHat security competition and its $200,000 prize.

Back in August 2011, Microsoft quietly announced a security competition called BlueHat. It didn't garner much media attention, since it was fairly obscure and aimed at security professionals: the types who work in academia, research firms, and so on. Last week, the company announced the three finalists, and the results are interesting to look at, because not only is the contest itself a new kind of event in the software industry, but the results may hint at what is to come in future versions of Windows and other Microsoft software. While a lot is still early-stage and speculative, there is a common thread, and it can give us some clues as to what the ultimate benefit of this contest will be for Microsoft, IT pros, and eventually everyday users.

First of all, the competition itself is fairly attractive for those with the know-how to participate. The first prize is worth $200,000, far more than any company pays for offensive security. Typically, software makers offer bounties to hackers or other security researchers when they find a bug or an exploit that could let bad guys take advantage of their software. The bounty system is well established by now: a hacker can make easy money by handing the exploit to the company rather than releasing it into the wild or exploiting it directly. All the large companies, like Google, Apple, and Adobe, offer such a program. With the BlueHat competition, however, Microsoft calls it the first "defensive security" contest. In its blog post, the company says that while most industry players stick to offensive security, Microsoft thinks that, in the long run, a defensive approach will work better.
So, after the table was set, the contest launched to the public. In all, Microsoft received 20 entries, which is a fairly small number, but we have to remember that the bar to enter was quite high. Here, we're talking about submitting brand-new proposals to make Windows and other Microsoft products fundamentally harder to attack from a security standpoint. In Microsoft's words, they were looking for runtime mitigation technologies designed to prevent the exploitation of memory-safety vulnerabilities. One interesting note from the post is that some of the best entries happened to be submitted at the very last minute, even seconds before the deadline. But let's take a look at who the finalists are. All three submitted new proposals that will earn them the various prize monies, and their proposals are listed on the finalists' page.
  • Jared DeMott, security researcher: "This novel defense lowers the effect of address space disclosures and mitigates known return-oriented programming (ROP) exploits."
  • Ivan Fratric, security researcher: “ROPGuard is a system that can detect and prevent the currently used forms of ROP attacks at runtime.”
  • Vasilis Pappas, Ph.D. student: “This proposed technique is called kBouncer, an efficient and fully transparent ROP mitigation technique.”
So while the details quickly get highly technical, it’s not hard to see a pattern here. It seems like the top people in the security community agree that the way to solve one of the most problematic issues in software security is to have ways to deal with return-oriented programming (ROP) attacks.
Attacks come in many forms, from buffer overflows to brute-force attacks, but Microsoft, like every other major software maker out there, has added a lot of low-level defenses to stop much of that malware from working in the first place. Something like DEP, or data execution prevention, is a huge deal that was added to the Windows core a few years ago. By itself, it can prevent code from being executed in regions of user memory where only data, not binary programs, should reside. Ironically, this is when ROP started to become so popular, because it's a way to bypass DEP, among other things.
Basically, ROP attacks allow execution of code in the presence of non-executable memory segments, without any need to sign the code either. It's a way to get malware to execute on computers without the user knowing it. So the best way to deal with these types of attacks right now, everyone agrees, is to deal with ROP.
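As an aside, Windows exposes a documented API for querying DEP status. The Python sketch below (Windows-only, via ctypes) is just an illustration; note that on 64-bit processes the call can fail, because DEP is always on there.

```python
# Ask Windows whether DEP is enabled for the current process.
import ctypes
from ctypes import wintypes

PROCESS_DEP_ENABLE = 0x00000001

def dep_enabled_for_current_process() -> bool:
    kernel32 = ctypes.windll.kernel32
    flags = wintypes.DWORD(0)
    permanent = wintypes.BOOL(False)
    ok = kernel32.GetProcessDEPPolicy(kernel32.GetCurrentProcess(),
                                      ctypes.byref(flags),
                                      ctypes.byref(permanent))
    if not ok:
        raise ctypes.WinError()  # e.g., a 64-bit process, where DEP is always on
    return bool(flags.value & PROCESS_DEP_ENABLE)

if __name__ == "__main__":
    print("DEP enabled:", dep_enabled_for_current_process())
```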
So right now, what does this mean for you and me? For one thing, it’s clear that Microsoft hasn’t figured out how to deal with all the malware out there, and that’s why they created the contest and offered such a generous prize. If one of those contest entries works, and manages to remove ROP attacks completely, we could see the landscape of Windows malware change drastically in the near future, with many of the attack vectors used becoming completely useless. Then, it could also lead other software companies to start dealing with defensive security as well as offensive bounties. This could be a great opportunity for security pros to get recognition and focus on pre-emptive strategies to combat future threats.

Friday, May 18, 2012

Malware poses as software updates: Why the FBI is warning travelers

Takeaway: Those “critical update” notices you get, especially while traveling, may not be what you think. Michael Kassner gets the low-down on this serious threat as well as the Evilgrade platform.
As someone who writes about IT security, I like to think I can recognize digital trouble when I see it. Recent events suggest that’s not the case.

Case in point



Last week, I received a call from a company vice president traveling in Sweden. “Yes sir, how can I help?” I asked after mandatory discussion on the likelihood of a new football stadium for the Vikings. “Just wanted to check,” he replied. Bless him. “TeamViewer is asking to update. Should I allow it?”
I was about to say sure. But I stopped short. Why hadn't my computer mentioned anything about updating? I'd been using TeamViewer all day. In what some would call a "CYA" move (I prefer "discretion is the better part of valor"), I told the vice president to wait until he got back; something seemed wrong.

What’s up?

After I got off the phone, I tried to update TeamViewer on several notebooks that hadn't been used recently; all were already up to date.

Okay, something’s funky.
None of my IT cohorts were aware of any issues. Fortunately, friend and fellow journalist Brian Krebs was. His post "FBI: Updates Over Public 'Net Access = Bad Idea" pointed me in the right direction. In the post, Brian referred to this FBI E-Scam and Warning newsletter:
“Recently, there have been instances of travelers’ laptops being infected with malicious software while using hotel Internet connections. In these instances, the traveler was attempting to set up the hotel room Internet connection and was presented with a pop-up window notifying the user to update a widely used software product.”
My gamble to have the vice president wait was fortunate indeed. The FBI alert continues:
“If the user clicked to accept and install the update, malicious software was installed on the laptop. The pop-up window appeared to be offering a routine update to a legitimate software product for which updates are frequently available.”
Sure sounds like what happened to the vice president. If that’s not bad enough, Brian mentioned something equally troubling in his post:
“Bear in mind that false update prompts don’t have to involve pop-ups. I’ve written about Evilgrade, a toolkit that makes it simple for attackers to install malicious software by exploiting weaknesses in the auto-update feature of many popular software titles.”

Evilgrade

Evilgrade takes it a step further. If applications have permission to auto-update, it's possible for Evilgrade to hijack the auto-update feature and install malware instead of an official update, leaving the user none the wiser.
Francisco Amato, the creator of Evilgrade, explains how the attack starts:
“This framework comes into play when the attacker is able to redirect traffic in one of the following ways: DNS tampering, DNS Cache Poisoning, ARP spoofing, Wi-Fi Access Point impersonation, or DHCP hijacking.”
Remember the FBI alert referring to hotel Internet connections? Attack tools like Evilgrade are the reason. Unlike company networks, public networks at hotels and cafes, particularly open-access ones, aren't secure, making them perfect for setting up one of the above attacks.

Vulnerable applications

Surprisingly, a way to defeat malware like Evilgrade already exists: digital signatures. And some companies already use them extensively. For example, if a Microsoft-based computer does not receive the correct digital signature with an update, Windows pops up a warning window.
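The underlying idea is simple: verify the update against a vendor key you already trust before installing anything. Here is a toy sketch using the third-party Python `cryptography` package; it assumes an RSA vendor key and is not how any particular vendor's updater actually works.

```python
# Accept an update only if its signature verifies against a pinned vendor key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def update_is_authentic(update_bytes: bytes, signature: bytes,
                        vendor_public_key_pem: bytes) -> bool:
    """The key must ship with the application, never alongside the update."""
    public_key = serialization.load_pem_public_key(vendor_public_key_pem)
    try:
        public_key.verify(signature, update_bytes,
                          padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```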

Unfortunately, not all app developers integrate digital signatures. And just our luck, the bad guys know which they are. Notice that TeamViewer is on the list:
  • iTunes
  • Java
  • Opera
  • QuickTime
  • Safari
  • Skype
  • TeamViewer
  • VMware
  • Winamp
If you're curious, the Readme.txt for Evilgrade has a more comprehensive list of vulnerable applications. Amato also created a YouTube video demonstrating how Evilgrade works.

What to do?

About a year ago, Brian came up with his "Three Basic Rules for Online Safety." It might be a good time to review the first rule: If you didn't go looking for it, don't install it! Brian explains:
“If you must update while on the road, make sure that you initiate the update process. Avoid clicking pop-up prompts or anything that looks like it was launched from an auto-updater. When in doubt, always update from the vendor’s website.”

Final thoughts

I got lucky this time. I still need to thoroughly scan the vice president's notebook, even though he didn't allow the update. The notebook was under attack, and the second wave might have been Evilgrade.

Thursday, May 17, 2012

Windows malware: Are you safer today than you were 10 years ago?


Summary: In 2002, after a series of widespread, high-profile, and highly embarrassing Windows-related security incidents, Bill Gates wrote his now famous “Trustworthy Computing” memo. So what’s happened in the intervening 10 years? Plenty. Take a trip with me down bad memory lane…
They don’t make malware like they used to.
That’s not a setup for a joke—it’s a fact. As I was researching some recent columns on malware outbreaks for PCs and Macs, I found myself reading old articles about computer security from the beginning of the 21st Century. Some of those articles and the threats they describe seem downright quaint in retrospect, while others were positively prescient.
During my research, I bookmarked a lot of web pages and made copious notes about threats that gave IT professionals ulcers and PC support staffs headaches in their time. And it struck me that the cat-and-mouse game between malware authors and their targets has evolved dramatically during that time.


In depth: Ten years of Windows malware and Microsoft’s security response

Take a trip down bad memory lane and revisit some of the worst offenders of the last decade, from primitive but effective early efforts like Blaster (2003) and Zlob (2005) to more deadly modern threats like the Zeus botnet and the Alureon (aka TDL4/TDSS) rootkit. During that same time, Microsoft was introducing its Patch Tuesday update program, the Malicious Software Removal Tool, and a variety of legal and technical efforts that effectively neutralized some threats.
My timeline puts the bad guys' work and Microsoft's response into perspective, showing, for example, the telltale markers of the Blaster worm side by side with the XP SP2 Security Center.

My decision to go back 10 years is no accident. One of the watershed events in the ongoing battle between the white hats and black hats of PC security happened exactly a decade ago.
In January 2002, after a series of widespread, high-profile, and highly embarrassing security incidents that affected Windows customers and Microsoft itself, Bill Gates wrote his now famous "Trustworthy Computing" memo. Although it was viewed with some skepticism at the time, it really did represent a turning point for Microsoft and for Windows users.
Until that point, security was literally an afterthought. As a result of the Trustworthy Computing initiative, Microsoft introduced a massive change in the way it develops software. The Security Development Lifecycle has paid off hugely over the last 10 years and has been widely praised and copied.
The bad guys and their products have changed during the same time. At the beginning of the century, the most noteworthy attacks were calculated to wreak havoc and garner worldwide attention. Over the past 10 years, malware authors have gotten more skilled at monetizing their work, and they’ve also learned the benefits of stealth.
In addition to building a more disciplined process for writing secure code, Microsoft has improved its update infrastructure and worked closely with outside security experts and third-party developers to improve the way their products work. Over time, Microsoft has built its own antivirus and network intrusion software; now that the 2001 antitrust agreement has officially ended, that software will finally appear in Windows itself.
Microsoft’s record on security is far from perfect. In Windows XP, for example, it introduced an effective firewall and then chose to leave it turned off by default. That mistake was corrected in XP Service Pack 2. One of the most brutally effective vectors for malware over the past four years has been a feature called AutoRun, which made every USB flash drive a delivery vehicle for the Conficker worm. AutoRun was disabled in Windows 7 by default, but Windows XP and Vista users had to wait until 2011 for a Critical update that blocked that dangerous vector.
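For what it's worth, whether AutoRun is restricted on a given Windows machine can be read from the documented NoDriveTypeAutoRun policy value. The Python sketch below (Windows-only) is illustrative; a value of 0xFF disables AutoRun for all drive types.

```python
# Read the NoDriveTypeAutoRun policy to see how AutoRun is restricted.
import winreg

KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"

def autorun_policy(hive=winreg.HKEY_CURRENT_USER):
    try:
        with winreg.OpenKey(hive, KEY_PATH) as key:
            value, _value_type = winreg.QueryValueEx(key, "NoDriveTypeAutoRun")
            return value
    except FileNotFoundError:
        return None  # value not set; OS defaults apply

policy = autorun_policy()
print("AutoRun disabled on all drive types" if policy == 0xFF
      else f"NoDriveTypeAutoRun = {policy!r}")
```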
There is no question that you are more secure using a modern version of Windows than you were in 2002 using the initial release of Windows XP. At the same time, attackers are more sophisticated and more focused on financial gain.

Sunday, May 13, 2012

Preparing for the DNSChanger Internet outage

Takeaway: Alfonso Barreiro tells all you need to know to clean up the DNSChanger malware that has affected millions of users. Make sure your organization is prepared for the July 9, 2012 deadline that the FBI has set to shut down temporary “clean” servers.

If one were to believe some headlines, there’s an Internet apocalypse coming on July 9, 2012, when hundreds of thousands of computers will be unable to access the Internet because of actions by the FBI. But before anyone panics, let’s cut through the hype and take a look at what happened and how you can prepare your organization and users before the deadline approaches.

So, what is going on?

Last November, the FBI announced the successful shutdown of a major click-jacking fraud ring in a joint investigation with Estonian authorities and other organizations, including anti-malware company Trend Micro. Seven individuals, six Estonians and one Russian, were charged with wire fraud and computer intrusion crimes. The investigation, dubbed "Operation Ghost Click," included the takedown of a botnet comprising nearly four million infected computers. Authorities raided datacenters located in New York and Chicago, removing nearly 100 servers. The computers in that botnet were infected with the malware known as DNS Changer, which has been in circulation since 2007.
The DNS Changer malware family silently replaces the Domain Name System (DNS) settings of the computers it infects (both Windows PCs and Macs), and of routers (yes, small office/home office routers still using their default admin usernames and passwords), with the addresses of malicious servers. Affected users were then directed to sites that served malware, spam, or large advertisements when they tried to reach popular websites such as Amazon, iTunes, and Netflix. Additionally, some variants of the malware blocked access to anti-malware and operating system update sites to prevent its removal. The operators of this botnet received advertising revenue when the pages were displayed or clicked, generating over $14 million in fees.
Due to the potential impact the removal of these DNS servers would have on millions of users, the FBI had the malicious servers replaced with machines operated by the Internet Systems Consortium, a public-benefit non-profit organization, to give affected users time to clean their machines. Originally these temporary servers were to be shut down in March, but the FBI obtained a court order authorizing an extension because of the large number of computers still affected. The new deadline is July 9, giving those still infected more time to fix their computers. As of March, the infected still included 94 of the Fortune 500 companies and three out of 55 major government entities, according to IID (Internet Identity), a provider of security technology and services.

How do I check if I’m infected?

If you are a network admin or IT pro, and you are pretty confident your organization is in the clear, you still may want to share these instructions with your users so that they are aware that their home systems could be infected and so that they can perform the self-checks.
Both the FBI and the DNS Changer Working Group have provided detailed step-by-step instructions for manually checking Windows XP, Windows 7 and Mac OS X computers for infection. Essentially, if your DNS servers listed include one or more of the addresses in the following list, your computer might have been infected:
  • 85.255.112.0 through 85.255.127.255
  • 67.210.0.0 through 67.210.15.255
  • 93.188.160.0 through 93.188.167.255
  • 77.67.83.0 through 77.67.83.255
  • 213.109.64.0 through 213.109.79.255
  • 64.28.176.0 through 64.28.191.255
If your computer checks out okay, you should also check your SOHO router settings. Consult your product documentation on how to access your router settings and compare its DNS servers to those on the list above. If your router is affected, a computer on your network is likely infected with the malware.
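If you'd rather script the check than eyeball addresses, the short Python sketch below compares DNS server addresses against the rogue ranges listed above. It uses only the standard library; supplying the addresses from your OS or router configuration is left to you, since gathering them automatically is OS-specific.

```python
# Check DNS server addresses against the rogue DNSChanger ranges.
import ipaddress

ROGUE_RANGES = [
    ("85.255.112.0", "85.255.127.255"),
    ("67.210.0.0", "67.210.15.255"),
    ("93.188.160.0", "93.188.167.255"),
    ("77.67.83.0", "77.67.83.255"),
    ("213.109.64.0", "213.109.79.255"),
    ("64.28.176.0", "64.28.191.255"),
]

def is_rogue(server: str) -> bool:
    addr = ipaddress.ip_address(server)
    return any(ipaddress.ip_address(lo) <= addr <= ipaddress.ip_address(hi)
               for lo, hi in ROGUE_RANGES)

for dns_server in ["8.8.8.8", "85.255.113.7"]:
    status = "possibly infected" if is_rogue(dns_server) else "OK"
    print(dns_server, "->", status)
```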
There are also several self-check tools that can help check your machine. One such tool is provided by the DNS Changer Working Group at http://www.dns-ok.us/. This site will display an image with a red background if the machine or router is infected; on a clean machine, the background will be green:

Figure A


There are several localized versions of this tool, maintained by different security organizations, each with instructions on how to clean up the infection (a complete list can be found here):
Site | Language | Maintainer Organization(s)
www.dns-ok.us | English | DNS Changer Working Group (DCWG)
www.dns-ok.de | German | Bundeskriminalamt (BKA) & Bundesamt für Sicherheit in der Informationstechnik (BSI)
www.dns-ok.ca | English/French | Canadian Internet Registration Authority (CIRA) and Canadian Cyber Incident Response Centre (CCIRC)
dns-ok.gov.au | English | CERT Australia
dns-changer.eu | German, Spanish, English | ECO (Association of the German Internet Industry)
The FBI also provides a form where you can enter the IP address of the DNS server configured on the machine:

Figure B


Depending on your organization's network configuration, you could set up alerts when machines on your internal network attempt to reach any of the listed addresses, or you can block them outright. Be careful if you opt to block them, though: any infected machine will essentially lose its Internet connectivity, since it won't be able to resolve any Internet server name it attempts to reach. Of course, this will also be a big clue that something is wrong if the support phone lines fire up on July 9 with users reporting mysterious Internet outages!

I found an infection! How do I fix it?

As with detection, a number of tools are available to fix an infection, and most anti-malware companies should be able to detect this particular threat. Since DNS Changer was delivered through different mechanisms over the years, some infections may be more difficult to remove than others; in some extreme cases, only a full reinstall of the operating system will ensure a successful repair. Be aware that your mileage may vary: DNS Changer was also part of some web exploitation kits, and other types of malware (backdoors, keyloggers, etc.) might have hitched a ride, complicating the removal process. If you have an affected router, you should also change its default admin password (and don't use an easily guessable password; it will be only a matter of time before someone else tries a similar attack).

What if my machine remains infected after the deadline?

Machines that remain infected or are served by an affected router after the temporary servers are removed will, for all intents and purposes, lose their Internet connectivity. How to fix it will remain the same, but with the added wrinkle that you will probably need a second, clean machine with Internet access for diagnostics and to obtain removal tools.

Saturday, May 12, 2012

What Microsoft can teach Apple about security response


Summary: Microsoft just released seven security updates to fix 23 vulnerabilities in Windows and other products. In February, Apple released a massive update that covered 51 vulnerabilities and also introduced an embarrassing security flaw. The contrast is striking.
Security vulnerabilities are a fact of life. Even the best-managed development processes will miss some attack vectors, leaving the software makers responsible for fixing the underlying vulnerabilities.
Speed of response is important. But equally important is how a software vendor communicates with its customers about those vulnerabilities.
This month, we have textbook examples of the right and wrong way to handle security flaws, courtesy of the two companies that together ship nearly 99% of all personal computers.
Let’s start with Microsoft.
Today is Patch Tuesday, the day Microsoft designates each month for delivery of security updates to customers.
I used to be a skeptic about the concept, but Patch Tuesday has proven itself over time. Microsoft still reserves the right to deliver an “out of band” security update in response to threats that are being actively exploited and can’t wait. But overall the system has worked well.
The level of transparency in Microsoft security bulletins is impressive. Today’s announcements included seven bulletins, each with details about the vulnerabilities it covers, the possible impact, and the urgency with which IT organizations should respond. Three of the seven bulletins are rated Critical. (ZDNet’s Ryan Naraine has more details.)
Those seven updates address a total of 23 separate vulnerabilities. Bulletin MS12-030, for example, addresses seven vulnerabilities in Microsoft Excel. Bulletin MS12-034 closes nine security risks in a variety of Microsoft products, including the Microsoft .NET Framework, Silverlight, and Windows itself.
For each one of those exploits, the Microsoft Security team rates the likelihood that exploit code will be released. Six of seven bulletins this month earn a rating of “1 – Exploit code likely.”
In addition, as is its custom, the Microsoft Security Response Center published a blog post today that goes into detail about the issues involved in these specific vulnerabilities. The post includes videos and deployment guidance. A separate post on the Microsoft Security Research & Defense blog addresses the technical issues involved in identifying vulnerabilities related to a patch first issued last year in response to the Duqu malware:
The Office document attack vector leveraged by the Duqu malware was addressed by MS11-087 – Duqu is no longer able to exploit that vulnerability after applying the security update. However, we wanted to be sure to address the vulnerable code wherever it appeared across the Microsoft code base. To that end, we have been working with Microsoft Research to develop a “Cloned Code Detection” system that we can run for every MSRC case to find any instance of the vulnerable code in any shipping product. This system is the one that found several of the copies of CVE-2011-3402 that we are now addressing with MS12-034.
If you are a consumer or a business user, you don’t need to know those details. You can install the updates and know that you’re protected from all the threats identified in those bulletins.
But if you’re an IT pro or a security researcher, those details are invaluable in helping you decide how to prioritize your testing and deployment plans for those updates.
Now, allow me to contrast that exhaustive security response and thorough communication strategy with the equivalent response from Apple, the developer of the world’s second most popular consumer operating system.
In February, Oracle issued a security patch to fix a critical Java vulnerability. Apple, which retains responsibility for delivering and maintaining Java SE 6 on OS X, did not release its version of that patch until April 3, 49 days later.
During that seven-week window, more than 600,000 Apple customers were infected with malware simply by visiting a website they clicked in a list of Google search results. They did not indulge in unsafe behavior. They did not fall for social engineering or hand over their administrator credentials. In fact, they did not even know they had been infected. And now, by most estimates, several hundred thousand Mac owners are still infected with that malware, which contains a backdoor component that allows a remote attacker to download any software onto that Mac and perform any action the user can perform.
To this date, Apple has acknowledged the existence of this malware only in a terse security bulletin titled "About Flashback malware." It has not explained how the malware works, nor how to remove it if one is running Mac OS X 10.5.
Another incident was less widespread but potentially more severe. Apple released update 10.7.3 to OS X Lion, its latest version, on February 1. That update addressed 51 separate vulnerabilities in OS X, of which 22 could result in “arbitrary code execution,” with one having the potential to execute arbitrary code with system privileges.
Given the sheer number of vulnerabilities fixed in that release, you’d be crazy to skip that update. But if you installed it, and you had previously encrypted your home directory using the version of FileVault included in Snow Leopard, a flaw in the update code would result in your system keeping a clear-text record of all login usernames and passwords in a file that any attacker could read with ease. The point of encryption is to prevent a thief from being able to access your data if he steals your computer. This blunder has the same effect as if you had written your PIN code on your ATM card and then had your wallet stolen.
This issue was first reported on an Apple support forum on February 6, five days after the update was released. It was publicized to the Cryptome mailing list on Friday, May 4. It has been widely reported in the media over the past 96 hours.
And yet Apple remains silent. The company has not published a support document acknowledging the issue. It has not offered any advice for affected Apple customers on how to tell whether they are a victim of this bug and, if so, how they can remediate it.
More importantly, no one has explained how such a horrendous security gaffe could pass code review and make it into the public release of a crucial OS X security update. If this kind of mistake can happen, who knows how many smaller, potentially more serious mistakes might also have slipped into what are supposed to be security updates? And what does that kind of boneheaded code mistake say about the quality of iOS?
With great fanfare, Apple hired Window Snyder more than two years ago, with the avowed goal of helping to secure the Mac ecosystem. Snyder worked for Microsoft for several years before moving to Mozilla to work on securing Firefox.
Last year, Apple hired David Rice, a security superstar from the U.S. Navy, as its global security director. His name and title are nowhere to be found on Apple's website.
Despite that influx of talent, Apple in the past year has been hit with its two biggest malware attacks in history, and the company’s response has been weak and mostly ineffectual.
As far as I’m concerned, Apple has serious work to do to restore its customers’ confidence. That work needs to start with a competent Chief Security Officer and a commitment to communicate with its customers about security issues. And it needs to cooperate with independent security researchers and its competitors. And yes, that includes Microsoft, which has a tremendous amount of knowledge gathered over more than a decade.
Security response is a cost of doing business. With $100 billion of cash on hand, Apple could afford to attack the security problem head-on. Instead, the company seems to be sticking its head in the sand.