What does it mean to “hack back” and is it a good idea?

There is growing talk about companies hacking back against those who attack them in cyberspace, and about whether allowing them to take such measures is a good idea. Right now, hacking back, or "active defense" as it is often called, is illegal under the federal unauthorized-access law, the Computer Fraud and Abuse Act (CFAA). There are current federal efforts to change this, along with some woefully misguided rumblings by some state legislators (who do not seem to understand that the CFAA supersedes anything they pass to the contrary).

So, the question is whether hacking back is a good idea or whether it will cause more harm than good. Shawn Tuma was a guest on the KLIF morning show to discuss this issue. Go here to listen to what he had to say about it.

What are your thoughts?

Fifth Circuit Upholds CFAA Conviction for Former Employee’s Misuse Causing Damage Based on Circumstantial Evidence

In United States v. Anastasio N. Laoutaris, 2018 WL 614943 (5th Cir. Jan. 29, 2018), the United States Court of Appeals for the Fifth Circuit affirmed a jury verdict finding Laoutaris guilty of two counts of computer intrusion causing damage, in violation of 18 U.S.C. § 1030(a)(5)(A) and (c)(4)(B)(i) of the Computer Fraud and Abuse Act.

Laoutaris had been an IT engineer for Locke Lord LLP. Following the termination of his employment, he accessed the firm's computer network without authorization and issued instructions and commands that caused significant damage to the network, including deleting or disabling hundreds of user accounts, desktop and laptop accounts, and user e-mail accounts. He was ordered to pay restitution of $1,697,800 and was sentenced to 115 months' imprisonment.

On appeal, Laoutaris argued that “the evidence at trial was insufficient to support the jury’s verdict for both counts of conviction because there was no proof he was the person who accessed Locke’s network and caused the damage that occurred on the relevant dates.” Further, Laoutaris had an expert testify that the attacks came from China.

The Fifth Circuit disagreed and found “[t]he evidence at trial shows a rational jury could have found each essential element for the § 1030(a)(5)(A) offenses charged against Laoutaris, who elected to testify. Contrary to his assertions, there was ample circumstantial evidence identifying him as the perpetrator of these offenses.”

The government’s brief indicates that the following evidence was admitted on this issue, beginning at page 6:

At trial, the government presented a substantial volume of circumstantial evidence identifying Laoutaris as the intruder. Logs created by the servers on the Locke Lord network showed that the intruder on December 1 and December 5 connected to the network using LogMeIn, which was installed on the HOBK01 backup server in Houston, and accessed the network using the credentials of a Windows “master services account” called svc_gn and its associated password. (ROA.1463-1515, 2835-47.) The IP address of the intruder on December 1 and December 5 was 75.125.127.4. (ROA.2768, 2835.)
That IP address was assigned to The Planet. (ROA.1077-79.) Laoutaris was an employee of The Planet at the time. (ROA.1068-70; see also ROA.2635-83.) Kelly Hurst, Laoutaris’s supervisor at The Planet, testified that the IP address was The Planet’s public wireless network at the Houston corporate office, which employees would be able to use while working out of The Planet’s corporate office. (ROA.1077-78.)
Laoutaris was also associated with the LogMeIn software running on the Houston backup server. The software program was installed by a person who identified his email address as “c_hockland@hotmail.com.” (ROA.1304-07, 2848.) Records from Microsoft established that the account was created by “A.N. Laoutaris.” (ROA.2587.) Further, several Locke Lord employees testified that “c_hockland@hotmail.com” was an email address they knew to be associated with Laoutaris. (ROA.1306.) Additionally, Laoutaris’s personnel file included his resume, where he used the email address, and an email he sent on his last day providing c_hockland@hotmail.com as his forwarding email address. (ROA.2550.) Even after he quit, Laoutaris used that email address to send a message to a former colleague at Locke Lord making disparaging comments about the firm and his former supervisor. (ROA.2559-60.) Laoutaris continued using the email address as recently as July 2014, after he was indicted. (ROA.2681.)
The government also presented evidence establishing that Laoutaris had the password for the “svc_gn” account. The “svc_gn” account was the “master of all masters” account that had “no limits” on what it could do within the Locke Lord network. (ROA.1147.) IT engineers at Locke Lord explained that all of the engineers would from time to time use the “svc_gn” account when performing various tasks on the network and that all the engineers had the password. (ROA.1147.) The jury heard evidence that Laoutaris asked for, and received, the password for the “svc_gn” account shortly before quitting the law firm. On August 10, 2011, a few days before Laoutaris quit, he requested the password from Michael Ger and Stan Guzic, two of the other IT engineers at Locke Lord. (ROA.2556-57.) Guzic testified that Laoutaris “constantly asked us for the password” and thus “to help him remember it, we used his name within the password itself” – specifically, “4nick8.” (ROA.1151.)
Not only was Laoutaris specifically tied to the December 1 and December 5 attacks, the government presented evidence tying him to at least 12 unauthorized intrusions into the Locke Lord network through LogMeIn. (ROA.2703-16, 2746, 2756, 2758, 2760, 2762, 2764, 2766, 2768, 2835, 2849.) Each of those intrusions originated from an IP address that was tied back to Laoutaris – either his home or his place of employment. (ROA.2703-16.)
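At bottom, the correlation work described in the brief is a simple join between two data sets: a list of remote-access sessions pulled from server logs, and a list of IP addresses already attributed to the suspect. A minimal Python sketch of that kind of analysis is below; the 75.125.127.4 address comes from the brief, but every other name, address, and date is invented for illustration and is not taken from the case record.

```python
# Hypothetical sketch of correlating remote-access log entries with
# IP addresses already attributed to a suspect (home, workplace, etc.).
# This illustrates the analytical step, not any actual forensic tool.

def correlate_intrusions(sessions, known_ips):
    """Return the sessions whose source IP maps to a known location.

    sessions  -- list of dicts with 'date' and 'ip' keys, e.g. parsed
                 from remote-access or server logs
    known_ips -- dict mapping IP address -> location tied to the suspect
    """
    matches = []
    for s in sessions:
        location = known_ips.get(s["ip"])
        if location is not None:
            matches.append({**s, "location": location})
    return matches

# Sample data loosely modeled on the brief's narrative; the second IP
# and all dates are invented.
sessions = [
    {"date": "2011-12-01", "ip": "75.125.127.4"},
    {"date": "2011-12-05", "ip": "75.125.127.4"},
    {"date": "2011-12-07", "ip": "203.0.113.9"},  # unattributed source
]
known_ips = {"75.125.127.4": "employer guest Wi-Fi"}

matched = correlate_intrusions(sessions, known_ips)
print(len(matched))  # 2 of the 3 sessions trace back to a known location
```

The point of the exercise is the one the jury credited: when every attributable intrusion resolves to an address tied to one person, the circumstantial case compounds with each additional match.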
The government’s brief also provides an excellent example of how to calculate a loss in a case such as this, beginning at page 12.

______________________

Shawn Tuma (@shawnetuma) is an attorney with an internationally recognized reputation in cybersecurity, computer fraud, and data privacy law. He is a Cybersecurity & Data Privacy Attorney at Scheef & Stone, LLP, a full-service commercial law firm in Texas that represents businesses of all sizes throughout the United States and, through its Mackrell International network, around the world.

Y2K18? Are #Spectre and #Meltdown the Y2K Apocalypse, Eighteen Years Late?

Hear Shawn Tuma interviewed on News Radio 570 KLIF – Experts: Update Settings and Download Updates to Protect from “Meltdown” and “Spectre”

CLICK HERE if you are impatient and only want to know what you should do ASAP to protect against Spectre and Meltdown

With Y2K we had a warning. So much of a warning that it pushed me into cyber law in 1998. We were told of an apocalypse if we did not heed the warning and fix the problem. Whether we fixed it, or whether it was all a lot of hype, is still debated, but the problem was averted: when the ball dropped to ring in 2000, the planes were still flying, the power grid was still operating, and the banks were still banking.

Fast forward eighteen years. The ball drops on NYE 2018, closing out a year in which the word cybersecurity (yes, it is one word, not two) became part of everybody's vernacular, and the only thing the words "Spectre" and "Meltdown" brought to mind was a James Bond movie marathon on New Year's Day.

Just a few days later we are now talking about a global threat to the world's computers, all of them, from the most powerful supercomputers to, yes, even Apple computers, all the way down to the computer you carry in your pocket (i.e., your smartphone). It isn't just a programming or software glitch; it is also a hardware problem, going to the very heart of the computer: its CPU.

The threat timing? Imminent — this isn’t something that is going to happen, this is something that has already happened and has just recently been discovered.

Now, unlike with Y2K, the problem will not by itself directly cause a failure; it is an exposed vulnerability that will allow others, the bad guys (whoever they may be), to exploit it. But take no comfort in that, because you can bet that to the bad guys the revelation of this vulnerability made this exploit Target of Opportunity #1.

The fix? This is where it gets interesting. "Meltdown" can likely be mitigated with software patches, which programmers at major companies are fervently writing as I write this. The problem is that these patches are expected to degrade computer performance by 20% to 30%. But they are not optional: you must install them.

“Spectre” is where it could get really nasty. It will likely require a redesign of the computer processors themselves, a wholesale hardware redesign that prioritizes security over raw performance. Then, to implement the fix, the hardware will have to be replaced and the CPUs in all of the world's computers upgraded.

Sounds pretty bad, doesn’t it? Is this the real Y2K apocalypse arriving eighteen years late — Y2K18 or Y2K8teen? It could be.

If history is any indication, it will not reach worst-case-scenario levels, but things could still get really, really bad even well short of the worst case. In fact, as this post is being written, some researchers with clout are saying the fix may not require the wholesale replacement of hardware, and I'm sure there will be more softening of this as we go along.

Remember, though: "WannaCry" exploited a single vulnerability in a specific, outdated Windows operating system, one that had been disclosed, and patched, months before the attack actually hit. We had all better take this one seriously.

What can you do? When the patches come out from Microsoft, Apple, etc., and they tell you to install them to protect your computer, do it immediately, and with a smile, because losing 20% to 30% of your computing power is far better than losing 100%!

3 Legal Points for InfoSec Teams to Consider Before an Incident

As a teaser to my presentation at SecureWorld – Dallas last week, I did a brief interview with SecureWorld and talked about three of the points I would make in my lunch keynote, The Legal Case for Cybersecurity. If you're going to SecureWorld – Denver next week, join me for the lunch keynote on Thursday (11/2), as I will again be making The Legal Case for Cybersecurity.

In the SecureWorld article, Why InfoSec Teams Need to Think with a ‘Legal’ Mind, Before an Incident, we discuss these three points:

  1. There are three general types of “cyber laws” that infosec needs to understand;
  2. Sadly, far too many companies do not take cybersecurity seriously until after they have had a significant incident; and
  3. Companies need to implement and continuously mature a cyber risk management program (such as my CyberGard).


What do we in the United States really want from our cyber laws?

In my newsfeed are articles in prominent publications discussing the problems with the federal Computer Fraud and Abuse Act from very different perspectives.

In the “the CFAA is dangerous for security researchers” corner we have White Hat Hackers and the Internet of Bodies, in Law360, discussing how precarious the CFAA (and, presumably, state hacking laws such as Texas’ Breach of Computer Security / Harmful Access by Computer laws) and the Digital Millennium Copyright Act can be for security researchers.

In the “the CFAA prevents companies from defending themselves” corner we have New Bill Would Allow Hacking Victims to ‘Hack Back’, in The Hill, discussing The Active Cyber Defense Certainty Act (ACDC). ACDC (what a great acronym!) would allow companies more latitude in defending themselves against those intruding into their networks by permitting them to use techniques described as “active defense,” under certain conditions, though not permitting companies to counterattack.

Now, instead of thinking about these two measures in isolation, think of them together. What if we were to get both of them passed into law? What if we got one or the other?

This reminds me of a piece I wrote about the CFAA and the broader national policy discussion a few years ago, Hunter Moore or Aaron Swartz: Do we hate the CFAA? Do we love the CFAA? Do we even have a clue? In that piece I stated,

The CFAA has become a national lightning rod, with many loving it, many hating it, and far too many loving it and hating it at the same time without even realizing it. Before we go any further, however, consider this quote:

The CFAA was tailor-made to punish precisely the kind of behavior that [guess who?] is charged with: breaking into other people’s accounts and disseminating their … information.

Quick! Who is that referring to? Hunter Moore? Edward Snowden? Aaron Swartz? Sandra Teague?

I used this overly simplified example to try to make a point: philosophically, we as a nation need to stop looking at each of these cases and laws in isolation and need to look at the bigger picture of how it all fits together. Picking and choosing based on our own personal likes and dislikes, driven by the emotional tug of the facts, is no way to develop, maintain, and mature a body of law on any subject matter, much less one as complicated as cyber.

Take this discussion, add into the mix new security-based laws such as the NYDFS cybersecurity regulation, then mix in the breach notification laws of 48 states plus HIPAA, GLBA, and the rest, along with the conundrum of cybersecurity law schizophrenia, and then see what we have to work with. Does it all make sense?

What do you think? Where do we begin? Who needs to be involved in working this out? What are the first questions we need to ask?