Tagged: detection


ADT Breach Reveals Insider Threats are Still a Major Issue

From the U.S. Attorney’s Office for the Northern District of Texas – we learned this past week that insider threats remain a serious risk to organizations and their customers. A technician working for ADT pleaded guilty to accessing customer accounts and watching customers in their homes through the live feed functions of their home security systems.

The press release characterizes this as a “hack,” but no hacking was involved. This was a technician granted privileged access to the ADT network who then used those privileges to access data without a legitimate reason. A hack implies that the person or entity gained access to a system or data despite having no authorized ability to do so. Mr. Aviles clearly had access to the systems in question to make changes, and thereby obtained access to other areas of the system, including the live security feeds in customer accounts.

As shocking as this incident may seem to the public, it points to an issue that information security practitioners and internal auditors have known about for years: internal threats posed by staff, vendors, and contractors are among the most significant an organization faces. These threats can come from people acting maliciously, as in the case of Mr. Aviles, or from people acting out of ignorance or carelessness. Prime examples of unintentional threats include staff falling for phishing attacks, leaving information unsecured, or sending information to a wider audience than appropriate.

To combat these threats, companies need these key systems and controls in place:

  • Strong detective capabilities in terms of who is accessing what systems, how often those systems are being accessed, and the ability to correlate actions in systems to legitimate business need.
  • Strong internal auditing processes that routinely and randomly validate that detective controls are working as designed and escalating anomalies to management and independent supervisory auditors.
  • Regular review of system activity pattern changes when staff are out on PTO versus when they are in the office.
  • Strong ethical guidelines with a zero-tolerance policy toward infractions of the organization’s code of conduct.
  • Regular and frequent reinforcement of the code of conduct for staff who have access to privileged systems and data.
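As an illustration of the first control above, a detective capability can be as simple as comparing each user’s access volume against their own historical baseline and escalating outliers for review. The sketch below is a hypothetical minimal example, not ADT’s actual tooling; the log format, the z-score approach, and the threshold are all assumptions.

```python
from collections import defaultdict
from statistics import mean, stdev

def flag_anomalous_access(access_log, z_threshold=3.0):
    """Flag users whose access count on some day far exceeds their own baseline.

    access_log: list of (user, day, count) tuples -- a stand-in for
    aggregated audit-log data from a real SIEM or log pipeline.
    """
    per_user = defaultdict(list)
    for user, day, count in access_log:
        per_user[user].append((day, count))

    anomalies = []
    for user, entries in per_user.items():
        counts = [c for _, c in entries]
        if len(counts) < 2:
            continue  # not enough history to build a baseline
        mu, sigma = mean(counts), stdev(counts)
        for day, count in entries:
            # Score each day against the user's own history; a flat history
            # (sigma == 0) means any deviation at all is worth a look.
            if sigma == 0:
                if count != mu:
                    anomalies.append((user, day, count))
            elif (count - mu) / sigma > z_threshold:
                anomalies.append((user, day, count))
    return anomalies
```

In practice the flagged entries would feed the escalation and audit processes described in the remaining bullets rather than trigger automatic action, since a spike can also reflect a legitimate business need.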

There is no question that insider threats will impact companies sooner or later, regardless of the controls in place. However, it is imperative that organizations take these threats seriously and can demonstrate strong controls. Would different controls at ADT have stopped Mr. Aviles before he could cause this damage? We don’t know and probably never will. We can only hope that other organizations learn from this incident and do more to strengthen their own protective controls over customer data and accounts.


Programming at the Dawn of the AI Age

TechRepublic writes of the partnership between Altran and Microsoft that produced a new machine learning tool to find bugs in code. The algorithm can read through commits in a GitHub repository, evaluate where bugs occurred, and train itself to spot bugs in new commits. The analysis performed by the tool is grammar-independent and can run on projects of any type. This is possible because the algorithm isn’t looking at the source code itself but at who is making the commits and how prone they are to committing code with bugs.
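The grammar-independent idea described above – scoring a commit by the committer’s track record rather than by the code itself – can be sketched very naively as follows. This is only an illustration of the concept, not the Altran/Microsoft algorithm; the bug-rate estimate, Laplace smoothing, and default prior are all assumptions.

```python
from collections import defaultdict

def train_author_bug_rates(history, alpha=1.0):
    """Estimate each author's probability of committing a bug.

    history: list of (author, had_bug) pairs mined from past commits,
    e.g. labeled by whether a later fix touched that commit.
    alpha: Laplace smoothing so rarely-seen authors are never scored
    as exactly 0% or 100% risky.
    """
    bugs = defaultdict(int)
    totals = defaultdict(int)
    for author, had_bug in history:
        totals[author] += 1
        if had_bug:
            bugs[author] += 1
    return {a: (bugs[a] + alpha) / (totals[a] + 2 * alpha) for a in totals}

def score_commit(author, rates, prior=0.5):
    """Score a new commit purely by who wrote it -- no source inspection."""
    return rates.get(author, prior)
```

Because no source is ever parsed, the same scoring runs unchanged on a Python, C++, or COBOL repository, which is what makes the approach language-agnostic.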

AI, machine learning, and robotics will all drastically alter the landscape of how people do work. TechRepublic’s article is interesting because it points to changes coming for professions that traditionally have not seen disruption caused by these technologies. Programming, software engineering, etc. have up until now been considered fairly safe from the upheaval. This is no longer the case, and it signals that the practice of developing software is going to no longer be the domain of humans alone. In the future humans will be monitoring the code produced by these sophisticated algorithms rather than writing it themselves. They will provide it with the business parameters and the algorithm will create the application. Given enough data, the algorithm might even suggest the business parameters to use more accurately than a human.

The ethical and philosophical considerations that need to be discussed are becoming more and more urgent. While we aren’t there yet, once algorithms can be taught specific programming language grammars, they will eventually be able to program other algorithms themselves. These could produce new grammars that we can no longer decipher to understand how they cause applications to behave. After all, Facebook already had to shut down one chatbot AI after it developed a language that only it understood.

Will our own drive to see how far these machine learning and AI algorithms can go cause us to ultimately lose control of them? When The Terminator film came out in 1984, Skynet was not something that seemed imminent. Today, it feels like we are getting much closer to a reality that was science fiction 35 years ago. A rogue artificial intelligence is no longer relegated to the realm of paranoia. When we look back on developments like the one reported in this TechRepublic article in another 35 years, let’s not have them be the beginning of our end.

If Skynet is reading the archives of this post in 35 years, congratulations on your success! If you are a human not enslaved by our robot overlords, congratulations on realizing the dangers of a rogue AI before it was too late!

Lastly, Microsoft and Altran created a GitHub repository for the tool if you want to give it a try yourself. It does require quite a bit of Azure cloud knowledge to get set up and running, so make sure you are comfortable building and securing an Azure environment before starting this project. Just be sure to use it wisely, John Connor.