Our Mission: To reduce your personal and business risks by deriving action items from recent news stories.
Note: Brent LaReau is your point of contact for this blog.
018
A Web Site Can Reset Your Samsung Phone to Factory Defaults
Brent LaReau, designsbylareau.com
Posted: Oct 12, 2012
The headline shown above seems too extreme to be true, doesn't it? But in fact, if you are reading my blog on a specific type of Android phone made by Samsung, I could have instantly "wiped" your phone RIGHT NOW by simply embedding a specific "USSD" code here. You wouldn't even have had time to read these sentences. Instead, your phone would have begun to reset itself as soon as it loaded this web page. When it was finished you would have found:
- Your text messages were deleted.
- Your installed apps were gone.
- Your contact list (address book) was empty.
- Your photos were missing.
- Your e-mail messages were erased.
- Your Wi-Fi network passwords were forgotten.
- Etc.
(You can see all of my cartoons here.)
Fortunately, not all Samsung Android phones are vulnerable to this attack. Full details are not yet known, but at least we know the following Samsung phones are vulnerable:
- Galaxy S2 with Android 2.3.6 or 4.0.3
- Galaxy Advance
How did this enormous vulnerability creep into Samsung phones? There is a short answer and a longer answer. The short answer is that Samsung's software development teams created a "dialer" app for its Android phones, which will instantly execute any USSD code without asking the user to confirm this action. One of those USSD codes will wipe the phone (reset it to its factory original condition). And if any of those USSD codes is embedded in a web page, the code is immediately executed when the page loads.
That was the short answer. Now for a longer answer that some people won't like: agile software development methodologies permit companies like Samsung to make 10,000 software modifications—new features and bugfixes—each year in a frantic race against their competitors. We have to admit this rapid, incessant march forward is quite an accomplishment, but how can an agile design team consider the consequences of each software modification when their main goal is to do more and do it faster?
How can an agile culture that revolves around optimization of business processes (not security processes) avoid oversights and mistakes that place end users at risk—to the delight of malicious hackers, teenagers, and blog writers like me?
How can the "user stories" embraced by agile methodologies include security considerations when the "user" is just an average consumer instead of a security expert?
And how can the "unit tests" embraced by agile software developers even begin to address system security issues at all? Especially since many lean, agile teams act as if unit tests can replace integration testing and system testing!
[Update: February, 2013—Samsung isn't the only vendor that fails to assess security issues when developing its products and software. The Federal Trade Commission (FTC) has announced a settlement with HTC over complaints about lack of security in its mobile phone software. The FTC stated that HTC made little or no effort to address user security when HTC customized Android and Windows Phone software for its smartphones. HTC's software was claimed to be sloppy; HTC didn't train its design teams in secure software development practices; HTC didn't perform any penetration testing on its mobile devices; and HTC's staff used software development methods that are well-known to be poor security practices.]
Let's peruse the facts of this case and generate some action items that we can use to reduce our risks:
- Fact: Galaxy S2 with Android 2.3.6 or 4.0.3, and Galaxy Advance phones are vulnerable. The bad news is that the full list of vulnerable phones isn't widely published. To reduce our risks we will need to perform some Internet searches (such as "samsung ussd") to discover if our Samsung phone is vulnerable. After this vulnerability became widely publicized, Samsung (to their credit) took some rapid corrective action, at least for some of their Galaxy phones. But in a way, this "corrective action" is like closing the barn door after the horse has bolted, since not all phones are updated automatically. Often we need to update our phones manually, which most people don't bother with (or don't know how to do).
Besides performing some Internet searches, we can also reduce our risk by assessing how our particular phone handles USSD codes. The easiest way to start this process is to use our phone's web browser to visit some harmless test pages set up by security companies and Android-oriented web sites (such as here and here). The results may be surprising.
- Fact: All GSM cellular service providers and cell phones support "USSD" codes for administrative or informational purposes. USSD stands for "Unstructured Supplementary Service Data". In short, dialing a USSD code on our cell phone (just like we would dial a regular phone number) will cause our phone or our cellular service provider to take specific action. These actions include:
- Querying the available balance or minutes remaining on a pre-paid phone.
- Viewing advanced cellular network statistics (such as cell site information).
- Displaying the phone's International Mobile Equipment Identity (IMEI) number, which is a sort of serial number.
- Displaying or changing the "call forwarding" settings.
- Viewing wireless data usage statistics.
- And of course, resetting the phone to factory default settings.
USSD codes are not standardized; many are specific to a service provider or a phone. Many cell phones are designed to ask the user to confirm a USSD-generated action before proceeding. To mitigate risks related to USSD, we need to explore what codes exist for our particular combination of service provider and phone. This information is not easy to find; we will need to perform some Internet searches (such as "t-mobile ussd" and so on). Obviously, if we experiment with USSD codes on our phone we must be very careful!
- Fact: "HTTP" and "HTTPS" are not the only URL/URI prefixes that desktop, laptop, tablet, and mobile web browsers know about. Everyone knows that typing "http://google.com" into our web browser will cause it to load Google's web page. "HTTP:" tells our web browser to use the HyperText Transfer Protocol when fetching Internet content. But most people don't know that Internet Explorer, Firefox, Opera, Chrome, Safari, and other web browsers are able to understand many additional URL/URI prefixes. It is one of those "other" URL/URI prefixes that allows USSD codes to potentially wreak havoc on a smartphone.
To reduce our risks, we need to understand just what our web browsers are capable of doing! Here are a few examples, some of which are supported only on specific platforms (such as smartphones):
- FTP (File Transfer Protocol). This protocol allows our web browser to access an FTP site on the Internet or on a local area network. Example: ftp://ftp.freshrpms.net.
- FILE. This protocol allows our web browser to see our computer's internal disk file system. Examples: file:/// or file://c:/.
- TEL. This protocol allows our web browser to dial a telephone number or execute a USSD code (which is the basis for the vulnerability discussed in this blog entry). Example: tel:8885551212.
- SMS. This protocol allows our web browser to send a text message. Example: sms:8885551212.
- MAILTO. This protocol allows our web browser to send an e-mail message. Example: mailto:bill@nowhere.com.
- TELNET. This protocol allows our web browser to access a TELNET server on the Internet or on a local area network. Example: telnet://towel.blinkenlights.nl.
Try clicking on some of the harmless examples shown above, using various mobile and non-mobile devices. (A short sketch after this list of facts shows how a USSD code would have to be URL-encoded to fit inside a "tel:" link.)
- Fact: Anyone can enter a potentially dangerous USSD code into our phone unless we take steps to prevent this. In every crowd there's always somebody who likes to play tricks on other people, or to get revenge for something. If we leave our phones unlocked, someone can quickly enter a USSD code and wreak havoc on our phones (or on our lives).
To reduce our risks we need to lock our phones when our phones are unattended. Automatically locking our phones is a good idea even when we're carrying them, as anyone who has been around children can attest. I know from experience that it only takes three seconds for a young child to suddenly grab a cell phone out of someone's purse or holster, turn it on, start an app, and run away at top speed.
And let's face it, our teenagers are Internet-smart and sometimes very angry at having to live within our rules. If they wipe our phones when our backs are turned, it's quite likely that we'll never be able to pin it on them.
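To make the "tel:" mechanism concrete, here is a minimal Python sketch showing how a USSD code must be percent-encoded before it can appear inside a "tel:" link on a web page. The harmless IMEI-display code *#06# is used purely as an illustration, and the link markup printed at the end is hypothetical; this is a sketch of the encoding step, not a reproduction of any real attack page.

```python
from urllib.parse import quote

# The harmless "display IMEI" USSD code, used only as an illustration.
ussd_code = "*#06#"

# '*' and '#' are not legal in a URI as-is, so they must be percent-encoded
# before they can appear in a "tel:" link embedded in a web page.
encoded = quote(ussd_code, safe="")   # -> "%2A%2306%23"

# A hypothetical link as it might appear in a page; on a vulnerable dialer,
# merely loading or tapping such a link runs the code with no confirmation.
print(f'<a href="tel:{encoded}">tap me</a>')
```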
You can read an original news article about this topic here. You can contact me here.
017
Malware Can Now Infect Virtual Machines That Aren't Running
Brent LaReau, designsbylareau.com
Posted: Sept 19, 2012
I have lots of virtual machine (VM) files on various computers and external hard drives. VMs are incredibly useful for developing and testing software; for evaluating different operating systems (even Android!); and for learning how to install and configure applications before installing these "for real".
But ever since I started using VMs I've been concerned about malware infections. As far as malware is concerned, attacking a running VM is no different than attacking a running physical computer. Therefore, a running VM suffers the same infection vectors as a physical computer:
- Network (targeted by worms)
- Files (targeted by viruses)
- Web browsers (targeted by malicious web site content)
- Etc.
But now we have another infection vector to worry about: a new strain of malware called "Crisis" has been identified that can find and infect virtual machines that are not running.
(You can see all of my cartoons here.)
Let's check the facts of this case and derive some action items that we can use to reduce our risks:
- Fact: The "Crisis" malware can infect a VMware virtual machine that is shut down. A VM that is shut down (not running) is simply a very large file on a hard drive. VM files have an extension of ".VMDK", ".VHD", ".VDI", or ".IMG", depending on whether VMware, Virtual PC, VirtualBox, or QEMU is managing the virtual machine. Several software tools exist that can "mount" various types of VM files to gain read-write access to their internal filesystem. The Crisis malware is designed to call upon a utility feature provided with VMware Player to mount ".VMDK" files and alter their contents to install the Crisis malware inside the VM. What an easy way to install malware! No buffer overflow exploits or escalation-of-privilege attacks or corrupted PDF files are required; simply copy some bytes into the VM file and you're done.
To mitigate risks related to Crisis-like malware opening VM files, we need to put into place multiple layers of defense. The most common layer of defense against malware is anti-virus software, but in the case of VM files we need to make absolutely sure that we install anti-virus software in two places: inside the VM, and outside the VM. Installing anti-virus software inside a VM (within the guest operating system) is a security "best practice", but it may not be able to stop future Crisis-like malware. This will become clear in the next bullet point shown below. Installing anti-virus outside the VM (within the host operating system) may allow the initial Crisis-like malware infection to be detected before it can taint any VM files. But remember, anti-virus software is increasingly useless against brand-new malware, as I explained in a previous blog entry. Therefore, additional layers of protection are needed (see next).
- Fact: If malware can alter a shut-down VM file it can also disable or remove anti-virus software installed inside that VM. Obviously, if a VM is not running then none of its defenses are running either. To reduce our risk, we can create another layer of defense against Crisis-like malware by using an Intrusion Detection System (IDS) tactic: we could detect unauthorized changes to shut-down VM files by determining if their checksum changes. For example, immediately after shutting down a VM we could calculate the VM file's MD5, SHA-1, or SHA-2 hash value. If malware like Crisis alters any bytes in the VM file its hash value will change. Re-calculating the VM file's hash value before starting the VM will clearly prove that the file did, or did not, change since it was last shut down. Shut-down VM files should not change unless they were corrupted, tainted, or deliberately mounted for legitimate purposes. (A short sketch of this hash-checking tactic appears after this list of facts.)
- Fact: The Crisis virus cannot infect encrypted virtual machines. At least, this was stated by VMware Inc. in one of their blogs. If true, then encryption is one way to mitigate the risk of a Crisis-like infection. Their blog mentioned that VMware Workstation software has a VM encryption feature (whereas the free VMware player does not). VMware Workstation costs around $250.
- Fact: Crisis can only infect ".VMDK" (VMware) VM files. Currently, Crisis is not able to mount and infect ".VHD", ".VDI", or ".IMG" files (that is, Virtual PC, VirtualBox, or QEMU virtual machine files). However, to reduce our future risks we need to assume that Crisis-like malware might some day be able to mount and infect other types of VM files. Defensive tactics such as outlined above can be used for all types of VM files (not just VMware files).
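Here is a minimal sketch of that hash-checking idea in Python, using only the standard library. The VM file name is a hypothetical example, and in practice the recorded hash should be stored somewhere the malware cannot reach (for example, on separate offline media).

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1024 * 1024) -> str:
    """Compute the SHA-256 hash of a (possibly very large) VM file."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

vm_file = Path("WinXP-test.vmdk")   # hypothetical VM file name

# 1. Immediately after shutting the VM down, record its hash.
recorded = sha256_of(vm_file)

# 2. Later, before starting the VM again, re-compute and compare.
if sha256_of(vm_file) != recorded:
    print("WARNING: VM file changed while it was shut down!")
```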
You can read an original news article about this topic here. You can contact me here.
016
Mom Changed Her Kids' Grades in School's Computer and Accessed School Employees' E-mails and Personnel Files
Brent LaReau, designsbylareau.com
Posted: August 1, 2012
When I read the news story that prompted me to write this blog entry, I thought, OK, how many times do we need to read about something like this, to finally realize that apparently anyone—even fairly average 45-year-old moms—can and will gain unauthorized access to someone's computer system? We should ask ourselves this question: "What makes me think that one of my employees—or their moms or sons or cousins—won't gain unauthorized access to my computer systems even once, let alone 110 times like this mom did?"
I used to think people wouldn't break into computers because it's unlawful and just plain wrong. But that didn't stop this mom, who later agreed her actions were unethical but said she didn't think they were illegal.
And, if we ignore the human factor for a moment, do we really think that our small business computers are somehow magically more secure than those owned by Northwestern Lehigh School District (which is where "mom" worked)?
(You can see all of my cartoons here.)
Let's dissect the facts of this case and extract some action items that we can use to reduce our risks:
- Fact: "Mom" was a former employee of the school district. In other words, she used to be an "insider". To mitigate risks related to insiders leaving our organization, we need to set procedures and policies in place to immediately change passwords when an employee leaves the organization (even for a temporary leave of absence such as for maternity purposes). And, we need to occasionally remind employees to refrain from providing information to those who leave the organization. Updating our written policies is also a good idea.
- Fact: "Mom" used the school superintendent's password. We can reduce our security risks by ensuring that employees don't share passwords with other employees. There are two ways to share a password. The obvious way is to tell someone what it is. But the not-so-obvious way to share your password is to allow someone else to use a computer after you've logged on to it. Eliminating password sharing is best done by raising everyone's awareness through security training, and by privately reprimanding employees—even if they are upper managers—who are caught sharing passwords.
But our best efforts will have little effect if the daily chaos of our workplace all but demands that employees share passwords just to get their jobs done promptly. We may need to re-think our employees' workflow to prevent putting them in a bind if we institute a "no sharing" security policy. For example, suppose our line supervisors are always away from their computers because they're constantly fighting fires, so they delegate routine data entry tasks to their staff. Therefore, their staff uses their supervisors' passwords. A "no sharing" policy would place our supervisors and staff in a bind. One solution is to elevate a few staff members' login privileges so they can perform their usual data entry tasks in their own names instead of their supervisors' names.
- Fact: "Mom" used the information of nine other employees to gain access to school district e-mails and personnel files thousands of times. Some people will blatantly access confidential data while they are still employed by an organization; others will do so only after they leave the company (perhaps because they think they're safe once they leave). To mitigate this risk we need to make sure employees cannot accidentally or deliberately discover other employees' login credentials. Let's be honest. At almost every company we can find laser-printed hardcopy of confidential documents (such as a password list) in someone's desk drawers. Or we can find confidential or sensitive spreadsheets on the flash drives that employees bring from home to work each day. Or, we can find employees sending confidential documents via their personal e-mail accounts, which means these documents can be retrieved from their "sent" items folder even long after they have left the company.
All of these represent data leaks. "Mom" probably found a password list somewhere, since it's unlikely that nine other employees gave her their login credentials. Therefore, we need to raise everyone's security awareness through training. Afterwards, we need to have a frank, non-threatening group discussion in which employees are encouraged to talk about how they may be currently in violation of these new principles, and what changes they could accomplish now that they know what's at stake. If employees are later found to be violating these requirements by keeping copies of sensitive information in unlocked desk drawers or on flash drives or in their personal e-mail accounts, they can be reprimanded privately.
- Fact: Vigilant employees thought it was suspicious that the "superintendent" was accessing the school's grading system. Their call triggered an immediate shutdown of the school's computer system and district officials tightened security policies. We can learn from their quick action to reduce our own risks. We need to encourage employees to immediately report anything that seems unusual, and then immediately and appropriately react when they do so.
You can read an original news article about this topic here. You can contact me here.
015
Company Almost Fell Prey to Industrial Espionage via Flash Drives
Brent LaReau, designsbylareau.com
Posted: July 19, 2012
After reading a recent news story about flash drives and industrial espionage, I wondered: what would my consulting clients' employees do if they found a flash drive in their company's parking lot?
Let's face it: most employees would plug it into a computer to see what's on it. And then a naive little convenience feature in Windows would either automatically run software programs that Windows discovered on the flash drive, or Windows would kindly ask if the employee wished to allow such programs to run. And of course most people would instantly click "Yes" or "OK" without even reading the warning. The end result is that any malicious software (malware) residing on that flash drive would shout "Yippee!" and then install itself on the computer and happily begin its dirty work of uploading passwords and Excel spreadsheets to computers located in China.
(You can see all of my cartoons here.)
Is this a ridiculous scenario? Not at all: that's exactly what almost happened to DSM, a major chemical company in the Netherlands. One of its employees did find a flash drive tainted with password-stealing keylogger malware in the parking lot, but he was smart enough to immediately turn in the flash drive to DSM's IT department. In turn, IT staff members were smart enough to analyze what was on the flash drive instead of just plugging it in like an average person would. The spyware they found was designed to steal usernames and passwords and then upload these to a remote server on the Internet. They quickly blocked the remote server's domain or IP address on their network to prevent data leakage.
Using infected flash drives to smuggle malware into companies has become a regular occurrence in recent years, according to security researchers.
Let's analyze the facts of this news item and formulate some action items that we can use to reduce our risks:
- Fact: Industrial espionage is real, and today it's all digital. We need to realize that it's easier than ever to collect confidential information from inside a company, including our company, whether large or small. In the old days someone had to hire a "spy" to somehow physically enter a company to collect information, and pay him handsomely to offset the risk of getting caught. Today anyone can just use Google, because a lot of organizations are foolish enough to upload confidential documents to their web sites. Most people don't realize that search engines can penetrate so deeply. You can see that for yourself by performing this Google search: "employee salary filetype:xls".
Using this search I was easily able to find a couple of large spreadsheets containing employees' full names, titles, and salaries. (People can also use Google to locate live video cameras owned and operated by various organizations that don't understand the term "default settings", but that's a different story.) To mitigate your own organization's risk of leaking confidential spreadsheets through its web site, try performing this Google search: "filetype:xls site:YOUR-DOMAIN", where "YOUR-DOMAIN" is your organization's domain name (such as "mycompany.com"). If you find anything you don't want to be publicly accessible, delete it from your web site, or talk to your webmaster about password-protecting such content.
But don't limit your searches to just spreadsheets. Google's "filetype" filter can be used to find LOTS of different filename extensions (a small audit script appears after this list of facts):
- Adobe Flash (.swf)
- Adobe Portable Document Format (.pdf)
- Adobe PostScript (.ps)
- Autodesk Design Web Format (.dwf)
- Google Earth (.kml, .kmz)
- GPS eXchange Format (.gpx)
- Hancom Hanword (.hwp)
- Microsoft Excel (.xls, .xlsx)
- Microsoft PowerPoint (.ppt, .pptx)
- Microsoft Word (.doc, .docx)
- OpenOffice presentation (.odp)
- OpenOffice spreadsheet (.ods)
- OpenOffice text (.odt)
- Rich Text Format (.rtf, .wri)
- Scalable Vector Graphics (.svg)
- TeX/LaTeX (.tex)
- Text (.txt, .text)
- Basic source code (.bas)
- C/C++ source code (.c, .cc, .cpp, .cxx, .h, .hpp)
- C# source code (.cs)
- Java source code (.java)
- Perl source code (.pl)
- Python source code (.py)
- Wireless Markup Language (.wml, .wap)
- XML (.xml)
- Fact: An employee found a flash drive in his company's parking lot. To learn from this news story, we need to realize that it could have been one of our own employees who found a (suspicious) flash drive in the parking lot. Therefore, we need to raise our own employees' awareness of this little scheme, so that they will be suspicious of any flash drives they may find. We must make sure to tell them that external hard drives fall into the same category. If someone is determined enough to steal company secrets they will just as likely risk a $50 hard drive as a $5 flash drive. And we must make sure to tell each new employee about this risk, so that new employees don't become a weak link in the security chain. And finally, we must tell employees not to take home any flash drives they find in the company parking lot. Why? Because many companies allow their employees to gain full access to the company network from home, using ordinary VPN connections over the Internet. If employees plug a tainted flash drive into their home computer and then log into the company network via VPN, it's almost as if they have plugged that flash drive directly into their computer at work.
- Fact: Their IT staff reacted immediately, conservatively, and expertly. For our own IT staff to do likewise, we need to provide them with whatever support they need to keep up with current information security risks; we also need to grant them authority to mitigate each risk immediately upon discovering it. It's easy to say these things, but let's look at how to put these action items into practice. First, the quickest and least expensive way to keep up with current risks is to read several information security news feeds each week. If you don't know where to start, try The Register, Security News Portal, and The H Security. After learning about the latest risks it will be necessary to gain in-depth technical knowledge of such risks. The quickest and least expensive ways to do this are to read university research papers (which will be mentioned in many security news feeds), and to read white papers published by various organizations. You can start with the SANS Information Security Reading Room and branch out to other sources later.
Now, let's cover on-the-spot risk mitigation. The quickest and least expensive way to learn about mitigating security risks is to do so ahead of time. We can set up a sandbox with spare computers and network equipment that are similar to what is used every day at our organization. Creating images of these hard drives allows us to "reset" the sandbox if necessary. For each current risk we learn about, we can set up the sandbox to demonstrate that risk. Let's take the current news story as an example. Take any old flash drive and imagine that it's loaded with spyware or other malware, whether intended for industrial espionage or not. Should we just plug it into the sandbox and see what happens?
When someone loans me their flash drive or external hard drive, I am automatically suspicious of what it may harbor, so I never plug it into a Windows computer. I plug it into a GNU/Linux computer, which is immune to Windows malware because it cannot execute software that has been compiled to run on Windows. If someone donates an old flash drive or external hard drive to me, I use a Linux computer to reformat the device and establish a virgin file system on it before I dream of plugging the device into a Windows computer. So, maybe one of our sandbox computers should run Linux so that we can explore a suspicious flash drive safely. If we don't want to have a dedicated Linux box in our sandbox, we can instantly turn a Windows computer into a Linux computer by booting it from a Linux "Live CD" or "Live DVD" (or equivalent boot image on a flash drive). You can find a handy list of Live CDs and Live DVDs here.
- Fact: The remote Internet server was immediately blacklisted by DSM's IT networking staff. They did this to protect DSM in case any other employees found other, similar flash drives and plugged these into corporate computers. Smart thinking. We need to ask ourselves whether our own IT staff knows how to blacklist an Internet URL, domain, and/or IP address. If they don't, we need to provide them with whatever they need to learn how to do so, because this skill could be critical for other security risks that have nothing to do with rogue flash drives. Perhaps our IT staff could practice blocking IP addresses in the kind of sandbox I described above. Side note: if your company doesn't even have a dedicated IT staff, I strongly suggest you ponder how to protect yourself if your adversaries need only fairly simple technical skills to penetrate your company's perimeter and extract information via the Internet. In many cases, keeping a local IT consultant on retainer can help to mitigate risks, as long as he or she is knowledgeable about generic information security topics and is willing to respond fairly quickly.
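As a small illustration of the first fact above, here is a hedged Python sketch that scans a local copy of a web site's document root for files with some of the extensions listed earlier, so you can spot candidates for removal or password protection before a search engine finds them. The directory path is a hypothetical example, and the extension set is only a starting point.

```python
from pathlib import Path

# Extensions that search engines can index via "filetype:" queries (see the
# list earlier in this entry); extend or trim this set to suit your audit.
SENSITIVE_EXTENSIONS = {
    ".xls", ".xlsx", ".doc", ".docx", ".ppt", ".pptx",
    ".pdf", ".rtf", ".txt", ".odt", ".ods", ".odp",
}

web_root = Path("/var/www/html")   # hypothetical web site document root

for path in sorted(web_root.rglob("*")):
    if path.is_file() and path.suffix.lower() in SENSITIVE_EXTENSIONS:
        print(f"Review before publishing: {path}")
```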
You can read an original news article about this topic here. You can contact me here.
014
SWAT Team Raids Wrong Home Due to Unprotected Wi-Fi Network
Brent LaReau, designsbylareau.com
Posted: July 9, 2012
The headline above is from an eye-opening news story I read recently. You can re-create this disaster in your own home or company. The ingredients are simple. First, go to Walmart, Best Buy, or CDW and purchase an inexpensive wireless access point or wireless router. Second, leave all of its settings in their default state and simply connect it to your DSL or cable modem (or your company's Internet connection). Finally, take any laptop computer, netbook, or smartphone and connect it to the Internet via your new wireless network. No password is required because you haven't enabled encryption!
The only problem is that your immediate neighbors (whether in an industrial park or at home), and even passers-by, can connect to your new open wireless network too. And then use your Internet connection for free. For whatever harmless or harmful purposes they want. And if authorities eventually track the harm to its source, they will naturally find that it's you. And if the harm is great enough, they will send in a SWAT team who will toss a couple of "flashbangs" (stun grenades) into your home or company to get your attention, and then interrogate or arrest your family or your employees.
OK, so open Wi-Fi is bad news. But those of us who HAVE protected our Wi-Fi networks at home and at work actually face the SAME risks as those who have open Wi-Fi networks. This is not obvious at all, so it will be explained in detail below.
Let's examine the facts of this story and derive some action items that we can use to reduce our risks:
- Fact: Someone had discovered the family's open Wi-Fi access point. To say that someone has "discovered" an access point may seem meaningless or trivial today, when access points are as common as rocks. But the rock is not important; what's under the rock is important. For the sake of argument, let's assume it could be risky if someone finds our Wi-Fi access point or router (whether protected or not). I'll explain why it's risky a bit later. To mitigate this risk, we first need to realize that there are three ways for someone to find our Wi-Fi network. Everybody knows the first way, which is to simply open a laptop computer, netbook, tablet, or smartphone, and see a list of nearby available access points. Unencrypted (open) networks will be clearly marked. But we must also be aware that anyone can purchase special Wi-Fi antennas that allow connections to wireless networks from a huge distance away. (We may trust our next-door neighbors not to abuse our wireless network, but do we trust the complete strangers who live or work four blocks away?)
The second way for someone to discover our wireless network is to walk—or drive—around our neighborhood or industrial park with a smartphone, netbook, or laptop computer. But is there a third way for someone to find our Wi-Fi access point, say, from across town, or from another city? You can answer that question for yourself by surfing to wigle.net. Click on the “Web Maps” link. There, you can easily “zoom in” to see your city, your neighborhood, or your company. Each colored dot on the map represents someone's Wi-Fi access point. Can you see your home Wi-Fi access point? Your company's? Side note: all of the wireless networks you see on wigle.net were found by people who engage in a hobby called wardriving.
(You can see all of my cartoons here.)
- Fact: The guy down the street had used the family's Wi-Fi to post threats against local police on topix.com. We need to realize that most bad guys don't want to be caught, so they will usually hide behind someone else, which is why the guy down the street deliberately used someone else's Wi-Fi. Sorry to mention this, but posting threats against the police is not as bad as it gets; bad guys often download—or upload—child p**nography via other people's Internet accounts through their open Wi-Fi connections. We do NOT want to be sucked into that mess. We need to ensure that our wireless access points—protected or unprotected—cannot be used by strangers. This is fully explained below (and the devil is truly in the details!).
- Fact: Authorities traced the threat on topix.com to the family. To reduce our risks, we need to understand that we are not anonymous on the Internet. The expression, "On the Internet, nobody knows you're a dog" is NOT TRUE. If someone uses our Internet connection they are really using our identity too. This is a form of identity theft. To understand just how exposed we are on the Internet, we need to see step-by-step how authorities traced the threat from topix.com to the family. To locate the source of the threat, police detectives used a fairly straightforward procedure. First, they contacted the administrator of topix.com and obtained the source IP address and time and date of the threatening post from the web site's visitor access log. Yes, every time we surf the web our public IP address is by default logged by every web site we visit (including designsbylareau.com). It's easy to translate this public IP address into geographic coordinates and determine who our Internet Service Provider (ISP) is, too. You can see this for yourself here. Once the police had this information they merely contacted the ISP, who checked their logs to see which account holder was assigned this specific IP address at this specific time and date. Presto! The police now had a name and address to target.
We need to realize that the exact same process can—and does—happen inside organizations. Employee Internet access is usually tracked. Worse yet, corporate (and K-12) proxy servers usually parse HTTPS (encrypted) web site traffic to allow anti-virus scanning, content filtering, and data loss protection. Translation: they read all of our "secure" personal webmail and "secure" online banking in plain-text form. The little lock symbol on our web browser means nothing inside organizations. Now that we understand how everyone is tracked on the Internet, and how all of our "secure" communications are parsed word-for-word, we can mitigate these risks at home and at work. First, we must spread the word so that our employees and family members know these facts, too. If they realize everything they do on the Internet is recorded, they will likely limit their activities to protect themselves (and us!). Second, we must NOT allow others to share our Internet connection at home or at work. This includes visitors, friends, relatives, neighbors, fellow students, fellow employees, and especially strangers down the street. To deal with strangers down the street we need to ensure that our wireless access points are adequately protected. This is fully explained below.
- Fact: Many other, similar cases of raids on innocent owners of open Wi-Fi access points have been reported in recent years. We need to understand that having an open Wi-Fi network will tempt people to use it. Even having a protected network will tempt people to use it. It is easy to see why: a 2011 poll conducted for the Wi-Fi Alliance found that among 1,054 Americans age 18 and older, 32% acknowledged trying to access someone else's Wi-Fi network (whether protected or not). It seems that everyone is a hacker nowadays. We need to assume that people will try to break in to our wireless networks, and that they may succeed. This assumption will improve our risk mitigation strategy. If someone breaks in, what would they be able to do? Surf freely? Access our network shares? Get copies of our videos and music? Use our VMware vSphere Client to access our ESXi hypervisor? Basically, they would be able to do everything we can. Therefore, at home and at work, we need to isolate as many critical resources as possible. This usually means shutting off access to everything we don't really need, and password-protecting everything that's dear to us. For example, we could password-protect critical network shares, and encrypt our critical files. Unfortunately, this will make us jump through our own hoops, as we will need to use those passwords to access our own critical resources.
- Fact: The "open Wi-Fi" problem can be solved by encrypting the wireless network. If the family mentioned in this blog entry's headline had taken the time to enable encryption in their wireless router or Wi-Fi access point, there would have been no SWAT team, no news story, and this blog entry wouldn't exist! Seriously, encryption—where a password is required to gain access—is not rocket science, but using it does require learning about various encryption options. But we won't really know what encryption options are available unless we first learn about how to administer our Wi-Fi access point's (or router's) configuration settings. Exploring a device's configuration settings may sound trivial, but even inexpensive consumer-grade Wi-Fi access points have many screens of (poorly-explained) configuration settings. To successfully mitigate risks associated with incorrectly-configured Wi-Fi access points and routers, we need to give ourselves some time to learn about our Wi-Fi settings in general, and then give ourselves some more time to learn about its encryption options.
Here is a quick primer. Most wireless access points and routers offer several types of encryption. Usually, this is WEP, WPA, or WPA2. Each type may have its own sub-options, such as TKIP, AES, PSK, or EAP/RADIUS. Never use WEP; this will be explained below. Don't use WPA if WPA2 is available, as WPA2 is newer and stronger. Home devices will be using PSK (pre-shared key, sometimes called "personal") while enterprises may use both PSK and EAP/RADIUS. And we should never use a simple encryption password (more about this below).
- Fact: Wi-Fi access points may permit interconnections between devices and computers. This refers to the fact that most Wi-Fi access points and wireless routers can connect to multiple wireless devices and multiple hardwired (Ethernet) devices at the same time, and may allow interconnections between any of these. This can place us at risk without us even knowing it. Here's a true story to illustrate this risk: some years ago I presented a seminar on network packet sniffing to a medium-sized company. This opened a few eyes and got several employees enthused about using network tools. A week later one of my seminar attendees called me from a hotel where he was traveling on business. He was quite happy to report that he had opened his computer's "network neighborhood" on the hotel's wireless network, and found several other guests' computers had open network shares. He explored their files and found a bunch of motorcycle photos on one guest's computer. (He was fond of motorcycles. If he had also found photos of someone's wife or girlfriend, at least he didn't mention it.)
To mitigate risks created by data leakage between computers on a wireless network, we need to explore our Wi-Fi access point's (or router's) configuration settings. Some units contain a setting to shut off connections between computers and/or mobile devices. You may also find several other settings that have to do with maintaining isolation between network connections. If our unit doesn't offer any applicable settings to achieve this isolation we will need to replace our unit with one that does offer such settings.
- Fact: Wi-Fi access points may allow "remote administration". A wireless access point—even an inexpensive consumer-grade one—typically allows three basic ways to access its configuration settings. The most common way (which I never use) is to install the manufacturer's software on a computer. Personally, I don't trust "dumbed-down" consumer-friendly software; I recently saw the following instructions in a (name-brand) wireless router's user manual: "In the upper-right corner of the screen, check for the green light that indicates your router is online and secure. If the green light is on, no additional action is required to secure your network". Instead, I use the second way to configure Wi-Fi routers and access points, which is to bring up my web browser and access the device's built-in web server. Yes, almost all network-accessible devices—such as printers, copiers/scanners/FAXes, network storage devices, DSL modems, and VoIP phone systems—now have built-in web servers for administration purposes. (Oh, goody! Another set of vulnerabilities to worry about!) In some cases the Wi-Fi access point's built-in web server can be accessed from wireless devices as well as from Ethernet-connected computers. The third way to access Wi-Fi configuration settings is through remote access over the Internet. This allows us to configure, upgrade, and check the status of our access point or router from, say, New Zealand.
Do we really need to access our Wi-Fi access point's configuration settings over the Internet? Allowing remote users to reconfigure our access point from anywhere on the Internet opens a huge security hole! Another, smaller, security hole is created when we allow mobile users connected to our access point to reconfigure it. Fortunately, we can mitigate these risks by disabling remote administration and mobile administration (two different settings).
- Fact: Even encrypted wireless networks can be cracked into. This is what most of us don't want to hear. But we need to realize that simply enabling encryption and inventing a password for our Wi-Fi access point does not guarantee that no one else can use it. We need to consider the following points before rushing off to encrypt our Wi-Fi. First, don't use WEP—the oldest type of wireless encryption—with any Wi-Fi access point. Free WEP cracking tools can be easily found on the Internet, and these were used in 2009 by a "neighbor from Hell" to terrorize a family and threaten Vice President Joe Biden. Second, a Wi-Fi network can be cracked into regardless of how strong the encryption is, if the encryption password is naive like "mike" or "password1" or "letmein" or "nothingtoseehere" or "getyourowninternet". The teenager next door has all night to guess our password; there is no penalty for guessing, and our Wi-Fi access point has no way to notify us of her repeated hacking attempts. Third, even if we are using newer Wi-Fi encryption methods such as WPA or WPA2 (the best currently available) and we are using a fairly complex, hard-to-guess password like "m2c.nediknel", it may be possible for someone to determine our encryption password without even guessing. How is this possible?
Here is the ugly answer, step-by-step:
- Find the networks. Many free wireless tools can be found on the Internet, which people can use to identify and monitor our wireless network traffic. For example, these tools can easily find "hidden" networks that don't broadcast SSIDs. These tools are a necessary first step towards cracking our Wi-Fi password, but are not sufficient by themselves. You can check out these tools for yourself in BackTrack Linux, which is available as a LiveDVD so that you can boot it up on virtually any computer, laptop, or netbook.
- Grab the hash. Other tools can be freely downloaded (such as in BackTrack Linux) that can forcibly de-authenticate our computer, smartphone, laptop, or netbook on our wireless network, in which case our computer (or etc.) will automatically re-authenticate itself. And then the aforementioned tools will quite nicely grab a copy of the cryptographic hash sent by our computer (or etc.).
- Reverse the hash. Once our hash has been obtained, other free tools (such as in BackTrack Linux) can be used to mount an offline dictionary attack or even a brute-force attack against our hash to determine what password was used to create it. Such an attack can take minutes on a fast computer for a short, simple password, or hours for a slightly longer and/or slightly convoluted password. There are two ways to mitigate the risk of someone using a dictionary or brute-force attack on our Wi-Fi password. First, we can literally change our WPA/WPA2 password every day to make cracking our password very expensive to an attacker in terms of his time (for all he gets is one day's use of our network each time he cracks our password). But changing our password every day may be expensive to us, too, in terms of our time. Second, we can make sure to use only long (10+ characters), completely random WPA/WPA2 passwords consisting of a balanced mix of all four types of characters: uppercase alphabetic, lowercase alphabetic, numeric, and special characters (such as tilde, caret, punctuation, etc.). Example: "V7&U3zW7c-". This will prevent a dictionary attack from succeeding (although the attacker won't know that until his dictionary attack ultimately fails). When dictionary attacks fail, password-cracking tools automatically fall back to a time-consuming brute-force attack. The more characters in our password, the longer it will take to crack our hash.
- Harness the cloud. If the previous step fails, or if it simply takes too long, our hash can be uploaded to a cloud computing service such as cloudcracker.com, which will quite possibly determine our Wi-Fi password within 20 minutes for only $17. There are two ways to mitigate the risk of someone using a cloud service to crack our Wi-Fi password. First, we can literally change our WPA/WPA2 password every day to make cracking our password very expensive to an attacker in terms of his time and money. But doing so may be expensive to us, too, in terms of our time. Second, we can make sure to use only very long (15+ characters), completely random WPA/WPA2 passwords consisting of a balanced mix of all four types of characters: uppercase alphabetic, lowercase alphabetic, numeric, and special characters (such as tilde, caret, punctuation, etc.). Example: "Vs&U_1+bx5GC9hY". This will prevent a dictionary attack from succeeding, and will force the cloud-based cracking service to fall back to a time-consuming brute-force attack. The more characters in our password, the longer it will take to crack our hash. (A small passphrase-generator sketch appears after this list of facts.)
- Log on. Once our password becomes known it is trivial for someone to log on to our wireless network without our knowledge. We can reduce this risk by changing our password often, or by checking our wireless access point's (or router's) connection log if one exists. Some inexpensive access points and routers don't log connections (logons) but do offer a real-time view of which wireless devices are currently connected. It's rather eye-opening to see two wireless clients connected when only one of those is yours.
- Fact: People may be able to remotely break into a Wi-Fi access point or router and simply grab a copy of the network password. No password cracking may be necessary. We need to understand that modern wireless access points and routers from reputable companies (Cisco, Buffalo, D-Link, and others) have a "convenience" feature known as Wi-Fi Protected Setup (WPS). This feature is supposed to make it easier to configure our Wi-Fi, but in fact a vulnerability was found in WPS that can allow an attacker to obtain full administrative access to our access point's configuration settings (including our WPA2 password). In 2011 a tool called Reaver was released that automates this process. Reaver can invisibly extract someone's WPA passphrase in less than 10 hours (usually 2-5 hours). To mitigate this risk, we need to either disable the WPS feature in our wireless router or access point (if that's even possible to do); or update the unit's firmware IF the manufacturer claims this will eliminate the WPS vulnerability; or replace our vulnerable wireless router or access point with another unit that doesn't have this vulnerability (or that allows WPS to be disabled).
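Since the mitigation in steps 3 and 4 above comes down to using a long, truly random WPA/WPA2 passphrase, here is a minimal Python sketch (using the standard library's secrets module) that generates one. The length and character classes follow the advice above; treat it as an illustration, not a hardened password manager.

```python
import secrets
import string

SPECIALS = "~^!@#$%&*-_+="

def random_passphrase(length: int = 16) -> str:
    """Generate a random WPA/WPA2 passphrase mixing all four character classes."""
    alphabet = string.ascii_lowercase + string.ascii_uppercase + string.digits + SPECIALS
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Keep only candidates containing at least one of each character class.
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in SPECIALS for c in candidate)):
            return candidate

print(random_passphrase())   # e.g. something like "qX7~fRw2@pZ9-mKd"
```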
You can read an original news article about this topic here. You can contact me here.
013
IBM Outlaws iPhone's Voice-activated Digital Assistant ("Siri") as It Leaks Data to a Third Party
Brent LaReau, designsbylareau.com
Posted: June 4, 2012
After reading a news story about IBM and Siri I began to dig into the details. And the deeper one digs into this Siri phenomenon, the more interesting it gets!
According to Apple and Wikipedia, Siri is an iPhone 4S application integrated into the iOS operating system that "lets you use your voice to send messages, make calls, set reminders, and more. Just speak naturally. Siri understands what you say." Sounds very convenient. So, why is IBM preventing employees from using it? Will other companies follow suit?
The answer lies in the details of how Siri works. We must begin by understanding that a little iOS software application running on a little 800MHz dual-core CPU—which is significantly less powerful than a laptop computer's CPU—cannot decode speech. It takes more horsepower than that.
Next, we need to understand that Siri "knows" what we are talking about only by establishing a personalized context to interpret words within. For example, "mike" can be either a person's name or an abbreviation for "microphone". If we routinely call someone named "Mike" then Siri should know that "mike" is a person with a phone number. On the other hand, if we are a singer or musician then Siri should know that "mike" means "microphone". A little app on a little CPU cannot know these things.
As you may have guessed by now, Apple's Siri software merely transmits our speech, plus a lot of other information stored in our iPhone to establish context, to the cloud. Specifically, Siri transmits our context data, plus raw audio that has been compressed using the Speex audio codec, via the HTTPS protocol over 3G or WiFi to Apple's large data center in Maiden, North Carolina. There, powerful CPUs, large application software, and extensive databases can be harnessed to decode our speech in near real-time so that a prompt, accurate, and appropriate response can be sent back to our iPhone.
Therefore, Apple's personalized context database in North Carolina stores everything our iPhone knows about us, including our address book contents, our GPS coordinates, the names of songs we listen to...
I used to think that Google was incredibly intrusive, as they pretty much keep track of everything we do on the Internet each minute of the day. This is known to anyone who has placed a packet sniffer on their computer while they surf, or who has reverse-engineered the JavaScript code that Google runs on our computers for almost every web site we visit.
But now it sounds like Apple is as bad as Google, and we can understand why IBM uses Apple's Mobile Device Management (MDM) framework built into iOS to disable Siri on its employees' iPhones. IBM sees no good reason for a third party to have intimate details about their employees, family members, business partners, new projects, e-mail accounts, physical locations, or etc.
And consider this: could hackers obtain a copy of these intimate details by sniffing CDMA, GSM, or Wi-Fi packets, or by hacking into Apple's cloud? Hackers have surely figured out how to drain information from many large data sources; why not Apple's data center too?
Let's look at some facts for this topic and define some action items that we can use to reduce our risks:
- Fact: In 2012, the American Civil Liberties Union announced that Siri is "sending lots of our personal voice and user info to Apple to stockpile in its databases." Specifically, Siri transmits (a) our first name and nickname; (b) the names of all our address book contacts, their nicknames, and their relationship to us (such as "my dad" or "work"); (c) labels assigned to our e-mail accounts (such as "My Home Email"); and (d) names of songs and playlists in our collection; and other unspecified information. Now that we know Siri leaks this much data to third parties, we should ask ourselves what other devices and applications leak data without us knowing it. Here's the bad news: many other apps parse data from our smartphone and upload it to various servers on the Internet. In 2011, researchers found that 20% of iPhone apps send some of our confidential information to remote servers. Candidly, this sounds worse than what the research details actually reveal. But a few apps stand out for blatantly leaking data. One of the worst examples is the Path app, available for both iPhone and Android, which its company says is "the smart journal that helps you share life with the ones you love". In 2012, a software developer discovered that Path transmitted his entire iPhone address book (including full names, e-mail addresses, and phone numbers) to Path Inc's web server without informing the user or obtaining his or her permission.
Privacy issues, such as leaking data to third parties, are very important to some people but unimportant to others. Therefore, each of us will need to determine whether leaking data to third parties represents a risk to us or not. Those of us who think this is risky can take steps to manage that risk. For example, we can choose not to use Siri or Path (etc.); we can think about privacy before installing each new smartphone app; and we can remove apps that we don't really need. We can even attach a network packet sniffer to our Wi-Fi access point to see what each of our apps is transmitting, and to which remote servers on the Internet (a tiny sniffer sketch appears after this list of facts).
(You can see all of my cartoons here.)
- Fact: According to Apple's user agreement, "By using Siri or Dictation, you agree and consent to Apple’s and its subsidiaries’ and agents’ transmission, collection, maintenance, processing, and use of this information, including your voice input and User Data, to provide and improve Siri, Dictation, and other Apple products and services." How many people have used Siri without even knowing what Apple's user agreement says about it? It's not as if a message pops up the first time we use Siri. It seems that to manage our risks we need to somehow locate and study all of the license agreements applicable to all of the products and software we use. This is a fairly tedious and difficult job, as companies are under no obligation to make this easy. In many cases we get just one chance, when we "click through" the legal mumbo-jumbo displayed to us when using new products or services. And there is usually no way to copy and paste this text, or print it out for future reference. Side note: have you read your Kindle's user agreement? It's fairly amazing what you don't actually own when you purchase something!
- Fact: Lawyers and government agencies are currently using subpoenas to obtain user information from Facebook, Google, mobile phone companies, and automated tollway systems (I-Pass/E-ZPass). Most companies don't inform the public when information is requested about their customers. One exception is Google, as you can see here. Isn't it just a matter of time before someone subpoenas Apple to obtain user information uploaded by Siri? We need to understand the risks of providing third parties with confidential information about us and our organization, since this may work against us later during legal action. Edward Wrenbeck—the lead developer of the original Siri iPhone app—has stated, "Just having it known that you’re at a certain customer’s location might be in violation of a non-disclosure agreement."
- Fact: Google Voice Actions is a voice recognition system similar to Siri. Voice Actions currently runs on Android devices such as smartphones and tablets. Until we learn otherwise, we must assume that using Google Voice Actions—or any other similar voice recognition method for mobile devices—would entail the same risk as using Siri.
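As a small illustration of the packet-sniffing idea mentioned above, here is a hedged Python sketch using the third-party Scapy library (run it with administrative privileges, and only on a network and devices you own). It simply prints where each captured IP packet is headed; the interface name is a hypothetical example, and mapping destinations back to individual apps is left to your own setup.

```python
from scapy.all import IP, sniff   # requires the third-party "scapy" package

def show_destination(packet) -> None:
    """Print where each captured IP packet is headed."""
    if IP in packet:
        print(f"{packet[IP].src} -> {packet[IP].dst}")

# "wlan0" is a hypothetical interface name; use whichever interface sees
# your phone's traffic (for example, a PC bridged to your access point).
sniff(iface="wlan0", filter="ip", prn=show_destination, count=100)
```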
You can read an original news article about this topic here. You can contact me here.
012
Does Your Industrial Control System Have a Back Door?
Brent LaReau, designsbylareau.com
Posted: May 10, 2012
Back in February I had blogged about how 10,000+ industrial control systems were found to be connected to the Internet, even though this violates both "best practices" and vendor recommendations. Worse, only 17% of those systems required a password.
It didn't seem that things could get any worse than that. But now, according to a more recent news story, a researcher has discovered that one brand of industrial network switches and servers that are commonly used in control systems contains a "back door".
Industrial control systems are used everywhere in industrial sectors and critical infrastructures. They literally run the whole planet. Everything would screech to a halt if we simply unplugged all of them at once. Factories would cease production, sewage treatment plants would back up, and we couldn't even buy a can of Coke.
It's a big deal when they don't work correctly. Aside from ordinary software bugs that can cripple industrial control systems, we're increasingly worried about hacktivism and other terrorist activities causing these control systems to fail in big ways. That's why the U.S. Department of Homeland Security created the Industrial Control Systems Cyber Emergency Response Team (ICS-CERT). Its stated purpose is to conduct vulnerability and malware analysis, provide on-site support for incident response and forensic analysis, provide situational awareness (intelligence), coordinate responsible disclosure, and share information and threat analysis through published alerts. You can learn more about ICS-CERT here.
OK, so everyone is concerned about hackers breaking into industrial control systems. Why, then, did RuggedCom put a backdoor in its Rugged Operating System, which is used in its industrial network switches and servers? A backdoor is a hidden means to gain remote access to a system or its software. Backdoors bypass normal authentication methods. And remote access through backdoors usually isn't logged anywhere, so access is entirely under the radar. Most users have no way to know if any backdoors exist in their systems or software.
You may recall that the 1983 movie War Games was based on the premise that a teenage hacker found a backdoor in a secret government computer system. Real-life hackers like to find and exploit backdoors in computers, systems, and software, too.
Let's break down the facts of this situation and develop some action items that we can use to reduce our risks:
- Fact: Accessing RuggedCom's backdoor requires trivial login credentials. The username is simply "factory" and the password is based on the switch's or server's network MAC address using an easy-to-duplicate algorithm. Those of us using RuggedCom's products are currently at risk; we need to take immediate action to determine what degree of exposure we face. If we discover that all of our RuggedCom products are behind VPN gateways or firewalls (with NO holes punched to allow administrative access to RuggedCom products) then all we need to do is alter our internal documentation and procedures to ensure that no holes to the Internet are opened up later. On the other hand, if we find that our RuggedCom products are exposed to the Internet, we can take appropriate steps to put our stuff behind a VPN gateway, and determine if hackers have already got in.
- Fact: RuggedCom's backdoor username and password cannot be changed, and the backdoor cannot be disabled. So, there is no "easy fix" here. Again, we need to take immediate action to determine our degree of exposure.
- Fact: The backdoor is found in all versions of the Rugged Operating System. Again, there is no easy fix (by simply upgrading device firmware). We need to assess our exposure.
- Fact: A search engine called SHODAN can find industrial control systems and their components on the Internet. Security experts claim that devices like laser printers and industrial control systems shouldn't be directly accessible on the Internet; they should be located behind firewalls and accessible only through encrypted and properly authenticated communications media such as VPN. Unfortunately, SHODAN has brought to light hundreds of thousands of such devices that are directly accessible, such as "webcams, routers, power plants, iPhones, wind turbines, refrigerators, and VoIP phones" (according to SHODAN's web site). For example, in 2 seconds SHODAN found me a nice Hewlett-Packard V1910-48G network switch ("Software Version 5.20 Release 1108") at IP address 58.26.250.209, owned by TMnet Telekom Malaysia in Kuala Lumpur.
To mitigate our risks due to hackers breaking into our devices and systems, we need to know if some of these are actually on the Internet. We cannot assume that nothing of ours is on the Internet, for even the best IT staff and field engineers can make a mistake in configuring networks and devices they install on-site. We can reduce our risk by using SHODAN to search for our stuff. To make that easier, SHODAN allows searches to be filtered according to city, country, latitude/longitude, hostname, operating system and IP address. If we find anything, we can take appropriate steps to put our stuff behind a VPN gateway, and determine if hackers have already got in. (A minimal sketch of an automated SHODAN search appears after this list of facts.)
(You can see all of my cartoons here.)
- Fact: Other vendors' industrial control systems also have security holes or even backdoors. For example, the venerable Siemens AG itself was highly criticized in 2011 for having a backdoor and hard-coded passwords in some of its industrial control system components. To reduce our total risk it appears we will need to examine all of our control devices to determine if any can be accessed remotely via the Internet. This will require some network engineering expertise.
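For those who want to automate that kind of search, SHODAN also offers a web API with an official Python library. Here is a minimal sketch, assuming you have installed the "shodan" package and obtained an API key; the query string and the key shown are placeholders, and some search filters require a paid API plan. Substitute banner text, hostnames, or filters that match your own equipment.

```python
# Minimal sketch: search SHODAN for banners that mention a product name,
# then print where each hit lives. Requires the "shodan" package and a key.
import shodan

API_KEY = "YOUR_API_KEY_HERE"   # placeholder
api = shodan.Shodan(API_KEY)

# The query is an assumption; use banner text, hostname, country, net, or
# org filters that describe your own devices.
results = api.search('RuggedCom')

print(f"Total matches: {results['total']}")
for match in results['matches']:
    print(match['ip_str'], match['port'], match.get('org', 'unknown org'))
```

Finding one of your own devices in the results is the cue to get it behind a VPN gateway and to start checking logs for signs of intrusion.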
You can read an original news article about this topic here. You can contact me here.
011
91% of Small Healthcare Practices in North America Suffered a Data Breach in 2011
Permalink
Brent LaReau, designsbylareau.com
Posted: April 16, 2012
While reading a news story about healthcare practices being hacked into, I remembered how my doctor and I had been in a rut for about three years. Once a year he'd say pretty much the same thing: "I see that you haven't signed up for online access to your medical records. Would you like to sign up? It's quick and easy. Then we can activate your online account and give you a temporary password. You can see all of your test results immediately instead of having to wait for us to mail them to you. And you can send me a message any time you wish."
And then I would count to 10 and give pretty much the same reply each time: "No, thanks. I don't want my personal medical records to be accessible on the Internet. I'm a consultant and one of my specialties is information security. I read about data breaches all the time and I'm familiar with how hackers gain access to computer systems."
And then he would pretty much offer the same rebuttal (sometimes with a slight frown): "I'm on the board that oversees our computer security. We've never had a hacker break in. Our web site uses a secure connection and it's password-protected. Most people use it and no one has reported a security problem."
To which I would always reply (after counting to ten again): "No, thanks. Just mail the information to me."
The interesting part is that even though we disagreed, he and I both had factual, self-consistent viewpoints that gave us confidence:
- Viewpoint: Most of my doctor's patients use his web site to access their medical records. Perhaps my doctor thinks something is safe if everyone uses it. I think most people have that opinion, and maybe it works for them. But to me, popularity has nothing to do with security. For example, most people use Microsoft Windows, yet for many years Windows has been the most widely-attacked desktop operating system on the planet. And pickpockets don't stand on empty street corners; they mingle with crowds in popular spots.
- Viewpoint: My doctor is on the board that oversees his organization's computer security. I'm sure this gives my doctor additional insight into his organization's security posture, which seems to have increased his confidence. But I'm also sure that he is unaware of most of the details. Is he aware of how his organization does, or does not, mitigate the risks related to password lengths, password complexity, password truncation, password character types, "secret questions", password reset and recovery methods, URL hacking, session management, cookies, clickjacking, incorrect server settings, cryptographic hashes, SQL injection, input validation, lack of database query parameterization, cross-site scripting, cross-site request forgeries, firewall configuration, database "attack surface", network topology and configuration, physical security, etc?
- Viewpoint: My doctor's organization has never seen a hacker break in. I'm sure my doctor believes this to be true. But lack of evidence doesn't prove that a hacker did not break in! Most break-ins remain undetected because most people don't examine their own logs (firewall logs, web server logs, account access logs, anti-virus logs, operating system logs...). It would have been comforting if my doctor had said, "...but our security team does have to respond to some minor incidents once in a while. We got hit by a malware infection earlier this year, and our network guy saw an automated password-cracking attempt last month." In short, if we see "nothing" it means we're not even looking, or it means we believe that a lot of things like malware incidents and firewall events are not related to security!
- Viewpoint: My doctor's patients use a secure Internet connection (HTTPS) to access their medical records. I'm sure my doctor knows the difference between HTTP and HTTPS, and knows that HTTPS uses encryption to securely transmit data. But the devil is in the details. The fact that a login page is downloaded via HTTPS has no bearing on whether login credentials are actually "posted" via HTTPS too. Web designers have been known to make mistakes occasionally, like forgetting to append an "s" to "http" in a URL buried in a form's HTML coding. And even if login credentials are posted via HTTPS, does their web site mix both HTTP and HTTPS content into the same web page? That's a "no-no" according to The Open Web Application Security Project (OWASP).
And even if they don't mix HTTP and HTTPS content, do they naively encode the patient's ID into URL parameters (such as "https://doctor.com/medinfo.html?patient_id=123456"), which would allow other patients' medical records to be accessed by simply altering the patient ID string in the URL? This is called "URL hacking", and in 2011 hackers stole personal details of more than 200,000 Citigroup customers by using this technique. Citigroup's web site is "secure" because it uses HTTPS, of course. And, even if naive URL parameters don't exist in my doctor's web site, a patient's username and password can be stolen whether HTTPS is used or not, if his computer is infected with a type of malware called a keylogger. So, what difference does having a "secure" Internet connection make, when naive web site coding mistakes or spyware can completely cripple that security?
- Viewpoint: My doctor's online medical records are protected by a password. My doctor and I both know that online accounts are commonly protected by passwords, and his organization's web site is no exception. But a password is just a trivial part of the total security picture. Passwords are completely useless if a web site's back-end database can be easily accessed via hacker techniques such as SQL injection. (A short sketch after these viewpoints illustrates the difference between a query that is vulnerable to injection and one that is not.) Does my doctor's web site have better resistance to SQL injection attacks than web sites belonging to well-known security companies like Kaspersky, F-Secure, and Symantec? Those companies suffered data breaches due to SQL injection vulnerabilities several years ago.
- Viewpoint: None of my doctor's patients has reported a "security problem" with their account. I trust my doctor believes this to be the case. But lack of evidence doesn't prove that a hacker did not break into someone's account! Consider financial identity theft for a moment. A lot of people have no evidence that someone has stolen their identity until they get a copy of their credit report once a year. The fact that we have no way to obtain the medical records equivalent of a credit report is a real problem. Regardless, preventing medical identity theft is not easy because we have very little control over how third parties treat our medical data. The biggest way to potentially reduce our risks is to refuse to activate our online medical accounts (as I have). But how much does this actually reduce our risk? Unfortunately, that depends on how our medical organization's web site and its back-end database are implemented.
I once heard a security expert say, "I design the security for online banking web sites, and I won't do any online banking!" If a medical organization's entire patient records database is directly connected to the organization's web site, then refusing to activate one's online account won't help at all. That's because various hacker attacks—such as SQL injection—completely bypass the normal authentication mechanisms and gain direct access to the database. In that scenario, it does not matter if a patient's records are associated with an online account having a username and password or not. But on the other hand, if a medical organization's web site is completely constrained to only have access to the subset of patient records having an online account username and password, then refusing to activate one's online account will prevent a web site attack from accessing one's records. When I say "completely constrained" I don't mean that records are merely filtered by an SQL query's "where" clause. A hacker can run his own queries. Instead, I'm referring to deliberate design decisions that decouple the web site from the organization's main, central databases. In that case, the main, central databases will provide the web site with data only for records clearly marked with an online account username and password.
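To make the SQL injection point more concrete, here is a minimal sketch in Python, with SQLite standing in for whatever database a real patient portal would use; the table and column names are invented for illustration. The first query builds its SQL by pasting strings together, so a hostile "patient ID" drags every row out of the table. The second uses a parameterized query, which treats the input strictly as data.

```python
# Minimal sketch: string-built SQL vs. a parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (patient_id TEXT, notes TEXT)")
conn.execute("INSERT INTO records VALUES ('123456', 'Patient A notes')")
conn.execute("INSERT INTO records VALUES ('654321', 'Patient B notes')")

user_input = "123456' OR '1'='1"   # what a hostile "patient ID" might look like

# Vulnerable: the input is pasted directly into the SQL text.
bad_sql = "SELECT notes FROM records WHERE patient_id = '" + user_input + "'"
print(conn.execute(bad_sql).fetchall())              # returns BOTH patients' rows

# Safer: a parameterized query keeps the input out of the SQL text.
good_sql = "SELECT notes FROM records WHERE patient_id = ?"
print(conn.execute(good_sql, (user_input,)).fetchall())   # returns nothing
```

This is exactly the kind of mistake that no amount of HTTPS or password policy can compensate for.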
My doctor hasn't mentioned my lack of an online account recently. If he mentions it again, I'll send him a copy of the news article that prompted this blog entry ("Most Small Healthcare Practices Hacked In The Past 12 Months"). The article states that 91% of small healthcare practices surveyed in a North American study claim to have had a data breach during 2011. This was based on a survey of 700+ organizations with 250 employees or less. Examples of "small healthcare practices" are physicians' offices, dentists' offices, home healthcare services, health clinics and nursing care facilities.
(You can see all of my cartoons here.)
Dare we compare our own organizations with 250 employees or less, to the small healthcare organizations mentioned in the news article? Can we learn from their experiences to reduce our own risks of a data breach?
Let's study the facts of the news article and draft some action items that we can use to reduce our own organization's risks:
- Fact: 91% of small healthcare organizations in North America claim they had some type of data breach during 2011. We need to understand that a data breach is defined as a data loss or a data theft. Data loss occurs when a laptop computer is stolen, or a backup tape is lost, or paper records are put into a dumpster without being shredded, etc. A data theft occurs when someone (such as a hacker or a robber) deliberately steals data so that he or she can sell it, or use it to commit fraud. A data loss can lead to a data theft if someone finds the lost data and correctly identifies it as being valuable. To mitigate our risk of data loss we need to make a list of all the ways data can leak out of our organization, and then perform an audit to see what changes are required to prevent this. Data can leak out many different ways, and some are organization-specific. Here is a brief list of some typical leakage paths:
- "Meta-data" embedded in Microsoft Office documents, photos, and other disk files. Some lawsuits have had terrible setbacks due to meta-data.
- Stolen computers and mobile devices containing unencrypted confidential data.
- Confidential documents stored on an organization's public web site. You can see some examples by performing this Google search: "employee salary filetype:xls".
- Backup tapes or used hard drives that are lost, discarded, taken home, or sold on eBay.
- Paper hardcopy that is discarded without being shredded.
- E-mail accounts that are protected by a weak, guessable password. Remember Sarah Palin?
- Voicemail. Yes, people can hack into your voicemail.
- Flash drives or external hard drives that are lost, discarded, taken home, or sold on eBay.
Now, let's consider data theft. To mitigate our risk of data theft we need to make a list of all the ways data can be stolen from our organization, and then perform an audit to see what changes are required to prevent this. In the Internet Age we are tempted to focus only on electronic data, so it is easy to ignore all of the traditional ways data can be stolen. Data can be stolen many different ways, and some are organization-specific. Here is a brief list:
- Hackable web sites. Refer to my previous discussion of SQL injection and other security holes, above.
- Unprotected computers, mobile devices, backup tapes, flash drives, external hard drives (etc.) containing unencrypted confidential data.
- Confidential paper hardcopy that can be found just lying around.
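As a small illustration of the meta-data item above: modern Microsoft Office files (.docx, .xlsx, .pptx) are really ZIP archives, and the document properties travel inside them. The sketch below assumes a hypothetical file named "quarterly_report.docx" and dumps its core properties without needing Office installed.

```python
# Minimal sketch: peek at the meta-data stored inside a .docx file.
# Office OOXML files are ZIP archives; core properties live in docProps/core.xml.
import zipfile

FILENAME = "quarterly_report.docx"   # hypothetical file name

with zipfile.ZipFile(FILENAME) as doc:
    print(doc.read("docProps/core.xml").decode("utf-8"))
```

The XML typically includes the author, the last person to edit the file, and revision timestamps, any of which can reveal more than intended once the file leaves the building.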
- Fact: Only 31% of small healthcare organizations say their management thinks data security and privacy are a top priority. So, almost 70% don't think data security and privacy are a top priority. And even if they did, merely thinking that something is a top priority doesn't mean anything is actually being done about it. To mitigate our own organization's risks, we have to first make data security and privacy a top priority, and then we have to put our money where our mouth is. The only way to make something a top priority is to infuse this goal throughout executive management. This is easier said than done, for it requires a solid business case that compares the cost and benefit of NOT doing something to the cost and benefit of doing it. If the company's leaders don't buy in, the entire company won't buy in. After everyone buys in, stakeholders need to be identified and then each affected department needs to view every part of their day-to-day operations in a new light so that new plans can be made. Implementing those plans will not be quick or easy.
Security and privacy cannot be tacked on as an afterthought. We cannot manage security and privacy off the sides of our desks. We cannot purchase data security and privacy from vendors (despite what they will tell you). Security and privacy do not equal a new piece of software or equipment. For example, we cannot simply add encryption software to our database, only to find out later that our web site has a gaping SQL injection hole that (of course) is NOT blocked by encryption. And we cannot simply upgrade our firewall to achieve instant data security and privacy. Maybe the new firewall mitigates an extra 5% of our risk, but what about the other 95%? Does the new firewall prevent our used hard drives—which are still loaded with gigabytes of confidential data—from being sold on eBay?
- Fact: About 70% of small healthcare organizations say they don't have (or don't know if they have) enough money in their budgets to meet risk management, compliance, and governance requirements. Unfortunately, it's difficult to prove any return on investment in security. There are two reasons for this. First, security is all about preventing something bad from happening. If something bad doesn't happen then there is no cost to clean it up, but the day-to-day preventive costs keep building up. Second, there is no industry agreement on risk factors and their cleanup costs, so it's hard to justify a proposed expense to mitigate a supposed risk. Therefore, instead of trying to prove an actual return on security investments, it's best if we tell business leaders that security investments are a normal, expected, and real cost of doing business. We will need to back that up with published facts about our industry, such as what our competitors are spending on security, or what price they paid after a security incident occurred. Another driver is whether our customers are concerned about our security (or lack of it). Our company's lack of security could drive them to our competitors.
- Fact: No one is responsible for overall patient data protection in more than one-third of small healthcare organizations. To mitigate our own company's security and privacy risks we will need to appoint an executive to be responsible for overall security. In many companies this executive's title is Chief Security Officer, and he or she is responsible for the company's entire security posture (both physical and digital).
- Fact: About half of small healthcare organizations say that less than 10% of their IT budget goes to data security tools. It is difficult to say whether 10% is a lot or a little. We can spend our entire security budget on one huge enterprise-class suite, leaving no money to redesign our web site to eliminate dozens of major security vulnerabilities. Or, we can encourage our IT professionals to use their knowledge plus free, open source tools to fill all sorts of security gaps. For example, a few years ago one of my consulting clients suffered a massive malware attack from the Conficker worm. I used Nmap (a network scanner) to find all of the infected computers on the premises, as well as to identify which computers were likely missing a critical Microsoft security update that would stop the worm. (A minimal sketch of that kind of scan appears after this list of facts.)
In general, "data security tools" typically means security policy management utilities; reporting tools (such as for logs); intrusion detection and/or prevention systems (IDS/IPS); web application firewalls; conventional firewalls; encryption software; network analyzers; vulnerability scanners; anti-virus; forensics tools; data loss prevention (DLP) systems; Internet content filters; spam filters; compliance auditing tools; virtual private network (VPN) gateways; authentication servers; digital certificate management systems; public-key infrastructure (PKI) management tools; biometrics equipment; and etc. - Fact: Nearly 75% of small healthcare organizations say employees are permitted to access business or clinical applications via mobile devices (laptops, netbooks, smartphones, and tablets). In recent years a trend called "BYOD" (Bring Your Own Device) has caused IT departments to lose control over which devices employees use. This is touted as both a cost savings and a security risk. It is a potential cost savings because employees bear the cost of mobile devices and related service plans, not their company. But employee-owned mobile devices introduce security risks (and legal risks) that are hard for companies to mitigate. For example, IT departments place anti-virus software on all company-owned computers to mitigate the risk of malware infections. The problem with BYOD is that almost no one has anti-virus on their mobile devices, but those devices mingle freely with orthodox company computers on the network. And some mobile devices such as smartphones don't have enough CPU horsepower to run an anti-virus equivalent to what a desktop computer uses. Therefore, BYOD has reduced corporations' ability to deal with malware.
BYOD has caused some IT departments to spend extra time and money trying to constrain their risks. Instead of centrally managing every device connected to their network, some IT departments are building special "sandboxes" on their network to allow monitoring and control of employee-owned devices. Through some clever networking tricks like access control lists and enumeration of MAC addresses on the network, IT departments can keep employee-owned devices off the main network and force employees to connect their devices to the sandbox instead. IT departments can then monitor and audit the sandbox. If an employee-owned device is involved with a security incident, or causes network problems, the sandbox effectively isolates the main network from the problem.
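As promised above, here is a minimal sketch of the kind of Nmap-based scan I described for the Conficker incident, driven from Python. It assumes Nmap is installed and that the smb-vuln-ms08-067 and smb-vuln-conficker NSE scripts are available (current Nmap releases ship them; older releases bundled similar checks under a different script name). The subnet is a placeholder. Note that the MS08-067 check is intrusive and can crash a fragile service, so run it only on networks you are responsible for, ideally during a maintenance window.

```python
# Minimal sketch: use Nmap to look for hosts that are missing the MS08-067
# patch (the hole Conficker exploited) or that already look infected.
import subprocess

SUBNET = "192.168.1.0/24"   # placeholder; substitute your own address range

proc = subprocess.run(
    ["nmap", "-p445",
     "--script", "smb-vuln-ms08-067,smb-vuln-conficker",
     SUBNET],
    capture_output=True, text=True)

# Nmap's human-readable report goes to stdout; review it for hosts flagged
# as vulnerable or infected. (Exact wording varies by Nmap version.)
print(proc.stdout)
```

The point is not this particular worm, which is ancient history now, but that free tools plus a little scripting can close real gaps without consuming an entire security budget.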
You can read an original news article about this topic here. You can contact me here.
« Previous 10 | Next 10 »