

Huawei opens a cybersecurity transparency center in the heart of Europe – TechCrunch

5G kit maker Huawei opened a Cyber Security Transparency center in Brussels yesterday as the Chinese tech giant continues to try to neutralize suspicion in Western markets that its networking gear could be used for espionage by the Chinese state.

Huawei announced its plan to open a European transparency center last year, but giving a speech at an opening ceremony for the center yesterday, the company’s rotating CEO, Ken Hu, said: “Looking at the events from the past few months, it’s clear that this facility is now more critical than ever.”

Huawei said the center, which will demonstrate the company’s security solutions in areas including 5G, IoT and cloud, aims to provide a platform to enhance communication and “joint innovation” with all stakeholders, as well as a technical verification and evaluation platform for its customers.

“Huawei will work with industry partners to explore and promote the development of security standards and verification mechanisms, to facilitate technological innovation in cyber security across the industry,” it said in a press release.

“To build a trustworthy environment, we need to work together,” Hu also said in his speech. “Both trust and distrust should be based on facts, not feelings, not speculation, and not baseless rumour.

“We believe that facts must be verifiable, and verification must be based on standards. So, to start, we need to work together on unified standards. Based on a common set of standards, technical verification and legal verification can lay the foundation for building trust. This must be a collaborative effort, because no single vendor, government, or telco operator can do it alone.”

The company made a similar plea at Mobile World Congress last week when its rotating chairman, Guo Ping, used a keynote speech to claim its kit is secure and will never contain backdoors. He also pressed the telco industry to work together on creating standards and structures to enable trust.

“Government and the mobile operators should work together to agree what this assurance testing and certification rating for Europe will be,” he urged. “Let experts decide whether networks are safe or not.”

Also speaking at MWC last week the EC’s digital commissioner, Mariya Gabriel, suggested the executive is prepared to take steps to prevent security concerns at the EU Member State level from fragmenting 5G rollouts across the Single Market.

She told delegates at the flagship industry conference that Europe must have “a common approach to this challenge” and “we need to bring it on the table soon”.

Though she did not suggest exactly how the Commission might act.

A spokesman for the Commission confirmed that EC VP Andrus Ansip and Huawei’s Hu met in person yesterday to discuss issues around cybersecurity, 5G and the Digital Single Market — adding that the meeting was held at the request of Hu.

“The Vice-President emphasised that the EU is an open rules based market to all players who fulfil EU rules,” the spokesman told us. “Specific concerns by European citizens should be addressed. We have rules in place which address security issues. We have EU procurement rules in place, and we have the investment screening proposal to protect European interests.”

“The VP also mentioned the need for reciprocity in respective market openness,” he added, further noting: “The College of the European Commission will hold today an orientation debate on China where this issue will come back.”

In a tweet following the meeting Ansip also said: “Agreed that understanding local security concerns, being open and transparent, and cooperating with countries and regulators would be preconditions for increasing trust in the context of 5G security.”

Met with @Huawei rotating CEO Ken Hu to discuss #5G and #cybersecurity. pic.twitter.com/ltATdnnzvL

— Andrus Ansip (@Ansip_EU) March 4, 2019

Reuters reports Hu saying the pair had discussed the possibility of setting up a cybersecurity standard along the lines of Europe’s updated privacy framework, the General Data Protection Regulation (GDPR).

The Commission did not respond, however, when we asked it to confirm that discussion point.

GDPR was multiple years in the making before European institutions agreed on a final text that could come into force. So if the Commission is keen to act “soon” — per Gabriel’s comments on 5G security — to fashion supportive guardrails for next-gen network rollouts, a full-blown regulation seems an unlikely template.

More likely GDPR is being used by Huawei as a byword for creating consensus around rules that work across an ecosystem of many players, by providing standards that different businesses can latch onto in an effort to keep moving.

Hu referenced GDPR directly in his speech yesterday, lauding it as “a shining example” of Europe’s “strong experience in driving unified standards and regulation” — so the company is clearly well-versed in how to flatter hosts.

“It sets clear standards, defines responsibilities for all parties, and applies equally to all companies operating in Europe,” he went on. “As a result, GDPR has become the golden standard for privacy protection around the world. We believe that European regulators can also lead the way on similar mechanisms for cyber security.”

Hu ended his speech with a further industry-wide plea, saying: “We also commit to working more closely with all stakeholders in Europe to build a system of trust based on objective facts and verification. This is the cornerstone of a secure digital environment for all.”

Huawei’s appetite to do business in Europe is not in doubt, though.

The question is whether Europe’s telcos and governments can be convinced to swallow any doubts they might have about spying risks and commit to working with the Chinese kit giant as they roll out a new generation of critical infrastructure.

This content was originally published here.



Shipping Industry Cybersecurity: A Shipwreck Waiting to Happen

The global shipping industry is vulnerable to a range of hacks, including one that can send multi-million dollar vessels on a collision course for disaster, according to researchers. Worse, the flaws are trivial to exploit and easy to mitigate, according to a report by Pen Test Partners.

“Ship security is in its infancy – most of these types of issues were fixed years ago in mainstream IT systems,” said Pen Test Partners researcher Ken Munro, in a report on the findings released this week. “The advent of always-on satellite connections has exposed shipping to hacking attacks. Vessel owners and operators need to address these issues quickly, or more shipping security incidents will occur. What we’ve only seen in the movies will quickly become reality.”

As part of its report, Pen Test Partners also released a number of proof-of-concept (PoC) attacks where it demonstrated multiple techniques for disrupting the shipboard navigation systems. “We’ve broken new ground by linking satcom terminal version details to live GPS position data,” according to the report.

Munro said that the PoC flaws are just the tip of the iceberg; many worse issues were uncovered. Those other bugs, he said, would be shared privately with vendors.

Forcing Ships Off-Course

In one of the PoCs shared in the report, researchers noted that the electronic charts used to navigate, called the Electronic Chart Display and Information System (ECDIS), are a ripe target for hackers. They said the ECDIS is not difficult to hack and manipulate once an attacker breaches the vessel’s network, and that breach is fairly simple to achieve because of an abundance of outdated operating systems and poorly protected configuration interfaces.

“We tested over 20 different ECDIS units and found all sorts of crazy security flaws,” Munro said. “Most ran old operating systems, including one popular in the military that still runs Windows NT.”

As hackable as it is, all too often, the ECDIS is left in charge of steering the ship, researchers said.

“[ECDIS] can slave directly to the autopilot – most modern vessels are in ‘track control’ mode most of the time, where they follow the ECDIS course,” Munro explained. “Hack the ECDIS and you may be able to crash the ship, particularly in fog. Younger crews get ‘screen-fixated’ all too often, believing the electronic screens instead of looking out of the window.”

In one PoC example, once an adversary gained access to the shipboard IT infrastructure, a hacker could fool the ECDIS into thinking that the GPS receiver was in a different location on board. That would effectively spoof the ship’s navigational systems to believe the ship was in a different place on the water. The system could then automatically “correct” the course, thus sending the ship off into the wrong direction.

The team was also able to expand the perceived GPS footprint to make the ECDIS think the ship was a kilometer wide, wreaking havoc with anti-collision systems. The AIS transceiver, responsible for collision alerts, uses ECDIS data not only to send the ship’s location to other vessels if there’s a perceived danger, but also to receive the same data back. By tricking the system into thinking a collision is imminent, attackers could cause other ships to alter their own courses, jamming up shipping lanes.

“Other ships’ AIS will alert the ship’s captain to a collision scenario,” Munro said. “It would be a brave captain indeed to continue down a busy, narrow shipping lane whilst the collision alarms are sounding.”

The implications here are profound: “Block the English Channel and you may start to affect our supply chain,” Munro added.

The researchers also found that it’s possible to hack the systems used to control the steering gear, engines, ballast pumps and more. These communicate using NMEA 0183 messages, which are sent in plaintext, with no message authentication, encryption or validation.

“All we need to do is man-in-the-middle and modify the data,” Munro said. “This isn’t GPS-spoofing, which is well known and easy to detect; this is injecting small errors to slowly and insidiously force a ship off course.”
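The tampering the researchers describe can be sketched in a few lines. This is an illustrative, hypothetical example (the sentence and heading field below are generic NMEA 0183, not taken from the report): because NMEA 0183 messages carry only a one-byte XOR checksum and no authentication, a man-in-the-middle can edit a field and simply recompute the checksum, and the receiving system has no way to notice.

```python
# Illustrative sketch: why unauthenticated NMEA 0183 is trivial to tamper with.
# The only integrity check is a one-byte XOR checksum, which any
# man-in-the-middle can recompute after editing a field.

def nmea_checksum(body: str) -> str:
    """XOR of all characters between '$' and '*', as two hex digits."""
    csum = 0
    for ch in body:
        csum ^= ord(ch)
    return f"{csum:02X}"

def tamper_heading(sentence: str, offset_deg: float) -> str:
    """Nudge the heading field of an HDT (true heading) sentence and
    re-sign it with a fresh checksum. Field layout: $--HDT,heading,T*hh"""
    body = sentence[1:sentence.index("*")]
    fields = body.split(",")
    fields[1] = f"{(float(fields[1]) + offset_deg) % 360:.1f}"
    new_body = ",".join(fields)
    return f"${new_body}*{nmea_checksum(new_body)}"

original = "$HEHDT,90.0,T*16"            # hypothetical autopilot heading message
print(tamper_heading(original, 1.5))     # small error, checksum still valid
```

Injecting a fraction of a degree per message, rather than one large jump, is what makes this attack hard to distinguish from ordinary drift.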

Real-World Implications

Barry Greene, principal architect at Akamai, said that a range of actors could make very good use of these kinds of attacks.

“It can be used (and most likely is being used) to track state intelligence interest,” he told Threatpost. “Criminal threat actors would look for ways to ‘monetize.’ If there is money, they will find a way to exploit. Corporate intelligence threat actors would (and most likely are) using these exploits to track competition. Activist threat actors would use it to track illegal shipping: banned animal products, weapons and human trafficking.”

He added that there are other, less obvious consequences.

“The ugly part is logical consequences that are not being considered,” he told us. “Think about the current pirate situation in several parts of the world. These pirates can use this information for their intelligence. What would be the response when someone gets killed in the Straits of Malacca by pirates who are using these exploits to target their hits?”

Further illustrating the real-world implications, Pen Test Partners has managed to link version details for ships’ satcom terminals to live GPS position data, establishing a clickable map where vulnerable ships can be highlighted with their real-time position (the map is deliberately not refreshed, however, so it stays out of date and useless to attackers).

All Back to Password Hygiene

To carry out any of the above attack scenarios, threat actors would first need to gain access to the vessel networks. Unfortunately, that proves fairly simple as well, given that satcom terminals on ships are reachable from the public internet. Many have default credentials, Munro explained, admin/1234 being the most common; failing to set a strong administrative password opens the door to a raft of security issues.

“It’s an easy way to hijack the satellite communications and take admin rights on the terminal on board,” explained Munro.

Looking into a Cobham (Thrane & Thrane) Fleet One satellite terminal, Munro found a number of exploitable flaws. For starters, the admin interfaces communicate via insecure telnet and HTTP. They also lack firmware signing, making it possible to edit the entire web application running on the terminal. There is also no rollback protection for the firmware, so a hacker could elevate privilege by installing an older, more vulnerable firmware version. Lastly, the administrator interface passwords are embedded in the configurations, hashed with unsalted MD5.
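That last point deserves a short illustration. A minimal sketch, assuming a hypothetical hash lifted from such a config file, shows why unsalted MD5 offers essentially no protection: with no salt, a given password always produces the same hash everywhere, so a precomputed wordlist lookup is all an attacker needs.

```python
import hashlib

# Sketch: cracking an unsalted MD5 password hash recovered from a
# terminal's config. Because there is no salt, every candidate password
# hashes to the same value on every device, so a simple dictionary
# lookup suffices. The "stolen" hash here is hypothetical.

def md5_hex(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

stolen_hash = md5_hex("1234")  # stand-in for a hash found in a config backup

wordlist = ["password", "admin", "1234", "letmein"]  # tiny stand-in wordlist
cracked = next((w for w in wordlist if md5_hex(w) == stolen_hash), None)
print(cracked)  # → 1234
```

Real attackers use wordlists of billions of entries and precomputed lookup tables; salting (and a slow hash such as bcrypt or Argon2) is what defeats both.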

All of these flaws (again, easily fixed with a strong password) offer routes into the vessel’s network; and, thanks to a general lack of network segregation on board most ships, attackers can likely easily pivot to the navigation system, Munro pointed out.


As in every sector, getting serious about the risk should be on the to-do list of vendors and shipping companies alike. However, that’s easier said than done.

“Hopefully, these findings will encourage action, but the reality is that most people who need to know about this risk within the shipping/container/port industry may not hear about this report,” said Greene. “They live in their own specialized community…There is a whole industry built around the shipping industry who never thinks about security. They are thinking, ‘how do I build this function to manage the container lift during the time it is pulling the container off the ship.’”

A good place to start, he added, is for shipping companies to pull in vendors for meaningful security conversations. “Their security interest would wake up the vendor to put security on the top of their list,” Greene explained, adding that shipping companies should make use of their existing resources.

“Their number one security talent is the specialist within their organizations,” he said. “They know their industry. They know their business. CxOs should take those teams, pull them off to the side for a couple of days and have them ‘think like hackers.’ They will come back with a list of security priorities that would be better tuned to the shipping/container/port industry.”




Trump’s Homeland Security Purge Worries Cybersecurity Experts | WIRED

This week kicked off a new, chaotic era at the Department of Homeland Security, where the only certainty seems to be the president’s obsession with immigration. As former Customs and Border Protection commissioner and prominent family-separation advocate Kevin McAleenan takes over as acting secretary, it’s fair to wonder what will happen to the rest of DHS’ many essential responsibilities.

The shakeup began last week, when President Trump announced he was withdrawing his nominee to head Immigration and Customs Enforcement, Ronald Vitiello, saying, “We’re going in a tougher direction.” Then on Sunday he ousted former secretary Kirstjen Nielsen, after months of rumors that he was unhappy with her performance. Secret Service director Randolph Alles and DHS undersecretary Claire Grady are also out, and there may still be more to come.

But DHS’ mandate goes far beyond immigration, to concerns like cybersecurity, counterterrorism, monitoring critical infrastructure, border privacy, and the development of science and technology in defense of the country. While Trump’s Homeland Security purge may not mean an immediate danger of those areas being neglected, former government officials worry about the long-term consequences of the hollowing out and restructuring of DHS.

“DHS’ cybersecurity operators don’t take a day off when they’re without top leadership, and to some extent, their day-to-day is insulated from the political level,” says R. David Edelman, former director for international cyber policy on President Obama’s National Security Council. “But absent leadership at the cabinet and deputy secretary level, DHS is going to start losing the fight for resources and its voice in interagency policy development—and that’s a cause for concern.”

Emily Dreyfuss covers technology’s impact on society for WIRED.

While Nielsen’s lasting legacy as DHS secretary may be her implementation of the Trump administration’s family separation policy, she also brought cybersecurity expertise to the job. Under Nielsen, Homeland Security shored up its cyber defenses with the creation of the National Risk Management Center, and it established the Cybersecurity and Infrastructure Security Agency. DHS also adopted controversial biometric and facial-recognition policies and restructured its domestic terrorism unit, much to the consternation of outside experts and some career workers within the agency. But Nielsen’s leadership on cybersecurity issues, for better or worse, stood out when the White House was cutting critical cybersecurity roles altogether, even as foreign hackers grew bolder.

Now Nielsen is gone, and it’s unclear whether whatever momentum DHS had on cybersecurity goes with her. As acting secretary, McAleenan technically has the same powers, but under US law he can only hold the position for a limited number of days (the standard is 210).

“There’s a lot of uncertainty of long-term strategic guidance,” says J. Michael Daniel, former cybersecurity coordinator during the Obama administration and current president of the Cyber Threat Alliance. “If someone in an acting position comes in and tries to take the department in a new direction, people are skeptical.”

President Trump has suggested that he likes having this limit on his cabinet secretaries, even though that’s probably not the intent of the law, and may not even be in accordance with it. “I like acting,” he told reporters in January. “It gives me more flexibility. Do you understand that? I like acting. So we have a few that are acting.”

There are more than a few top-level vacancies at DHS. According to The Washington Post’s tracker, only 39 percent of key Homeland Security positions are filled. Even before the past week’s purge, FEMA, which is under the umbrella of DHS, had no Senate-confirmed leader. Neither does the Office of Strategy, Policy, and Plans, the Science and Technology Directorate, nor the Office of the Inspector General.

“DHS’ voice is vital around the Situation Room table,” says Edelman. “Looking ahead, as we consider issues like national security controls over AI, or limits to foreign investment, DHS is going to be more crucial than ever—and that absence of leadership could lead to some very skewed outcomes.”

Outcomes like squabbling, misunderstandings, and deadlock—or even increased national security risk, if the department begins focusing only on immigration rather than its broader mandate. “DHS is once again focused on one risk at the exclusion of the others. Any nation that puts its entire weight behind just one security challenge (and steers dollars from other security needs, such as the military) is letting other vulnerabilities go unaddressed and ignored,” Juliette Kayyem, former assistant secretary of DHS in the Obama administration, wrote in an op-ed in the Post.


Consider the role DHS plays in something like attributing cyber threats against physical targets, for instance: The department helps negotiate between parts of the government with competing mandates—law enforcement may want to preserve evidence while other parts of the government just want to get machines and power turned back on. Without DHS empowered to moderate, who decides? It’s not immediately clear, according to Daniel.

“This just continues to contribute to the turmoil that has become a hallmark of this administration,” he says.

Edelman warns that some of the unintended consequences of a blunted DHS might not make the administration happy—like greater influence from the intelligence community on matters of national security. “The competition for cybersecurity resources and authorities is fierce, and when it comes to the operational gray zone—between domestic and international, public and private sector networks—a vacuum at DHS might be filled by overeager defense or intelligence agencies,” says Edelman.

Most crucially, it leads to policy paralysis. And that will hit even issues the administration is bullish on, like the development and implementation of secure biometrics. “Persistent vacancies in science and technology offices may well delay that process, slowing down the sort of long-lead-time, high-tech work we need for smarter border security, critical biodefense, and even WMD detection applications,” says Edelman.

The good news is that there are still Senate-confirmed leaders in charge of DHS subagencies like the Transportation Security Administration, the Office of Intelligence and Analysis, the Countering Weapons of Mass Destruction office, and the Cybersecurity and Infrastructure Security Agency. And the career federal workers who actually implement DHS policy are still there doing their jobs. They will be able to keep current policies going and respond to active emergencies.

But their jobs might get harder. “The career people can keep the trains running,” says Daniel. “The bigger issue is the long-term policy paralysis and the policy turmoil that this lack of permanence and long-term thinking will inevitably exact.”





Drones are Quickly Becoming a Cybersecurity Nightmare | Threatpost

Drones are a growing threat for law enforcement and business security officers. In the run-up to Christmas 2018, rogue drones grounded planes at London Gatwick, the UK’s second-busiest airport. But increasingly it’s not just air traffic controllers sounding the alarm over drones; it’s also the cybersecurity community.

Drones are already being used as one component of cyberattacks, said Tony Reeves, a director at consulting and training company Level 7 Expertise and a former officer in the UK’s Royal Air Force.

With drones costing from as little as $30 to $10,000 or more for specialist professional models, Reeves said, they can be used for any number of different styles of attack.

Low cost and easy to use, drones can deliver a “payload” to carry out surveillance, to capture data, or to disrupt networks. Making matters worse, drones are hard to detect and defeat, he said at the recent CRESTCon ethical-hacking conference in London.

Reeves’ firm is unusual in combining cyberdefense work with expertise in intelligence gathering and unmanned aerial vehicles, and plans to use drones as part of an ethical penetration testing program.

“Drones are disruptive, not least because they bring a rapid reduction in the skills operators need,” he said. “You would crash an old-style remote control plane in 30 seconds, if you had no training. But kids can fly today’s drones.”

Law-enforcement agencies and aviation regulators are increasingly concerned about the risks drones pose to jetliners. The heavy lithium-ion batteries in drones could puncture the skin of an aircraft wing, or smash the blades of an engine. Groups in Syria and Iraq have used modified remote-control aircraft as flying bombs.

Cutting Holes in Geofences

In the case of the Gatwick airport incident, UK authorities responded by deploying military antidrone defense systems. The details have not been made public, but the deployment essentially extended the no-fly area around the airfield. One way authorities enforce these no-fly drone zones is with geofences, a type of software-level programming that restricts where a drone can fly.

Off-the-shelf drones are being fitted with geofencing software, so that owners cannot fly them over airports or other restricted areas. DJI, the market leader, has geofencing for airports, prisons and nuclear power plants. Parrot, the No. 2 manufacturer, also has geofencing in its ANAFI software, but pilots can turn it off.
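A geofence of this kind can be sketched in a few lines. The zone coordinates, radius, and function names below are illustrative assumptions, not any vendor’s actual implementation: the flight controller compares its GPS fix against a list of no-fly circles before allowing flight. The sketch also makes the weakness discussed later visible: the check depends entirely on having a GPS fix to compare.

```python
import math

# Minimal sketch of a software geofence: compare the drone's GPS fix
# against no-fly circles before permitting takeoff. Coordinates and
# radius are hypothetical.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical no-fly zone: a 5 km circle around Gatwick's coordinates.
NO_FLY = [(51.1537, -0.1821, 5.0)]

def takeoff_allowed(fix):
    """fix is (lat, lon) from GPS, or None if no signal is available."""
    if fix is None:
        return False  # fail safe; a controller that allows flight here is bypassable
    lat, lon = fix
    return all(haversine_km(lat, lon, zlat, zlon) > radius
               for zlat, zlon, radius in NO_FLY)

print(takeoff_allowed((51.5074, -0.1278)))  # central London → True
print(takeoff_allowed((51.1540, -0.1820)))  # near the runway → False
```

The design choice in `takeoff_allowed(None)` is the crux: a controller that fails open when GPS is absent is exactly what the tinfoil-wrapping trick exploits.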

Rogue operators could, of course, build a drone themselves without any geofencing hardware or software. Or they could turn to basic hacks.

“We are seeing some leakage of tactics and information from Islamic State operations in Syria, where they defeated geofencing by denying the drone a GPS signal by wrapping it in tinfoil, and flying manually,” explained Reeves.

“There is a Russian website – on the open internet, not the dark web – that offers hacks for all DJI products. This apparently removes geofencing, altitude and speed limitations. If the Russians can do it, then it’s a fair call to believe that a committed Western hacker could do the same.”

“Equipment is now available to hack drones so they can bypass technology controls,” warned James Dale, a cybersecurity expert at PA Consulting, a firm with both aviation and cyber practices.

“There are now regulatory controls, in some regions, to force drone operators to use geofencing systems,” Dale said. “Yet, there are examples of online vendors selling software and hardware modifications for drones, which are designed to disable these ‘No Fly Zones’ limitations.” The threat from these hacks will only grow as regulators make more use of geofencing-based no-fly zones. Large sporting events or protests are just two occasions where regulators already restrict drone flights. They are likely to automate the restrictions if they can.

Some drone owners will view this as a challenge. Reeves categorizes these users as “disruptive enthusiasts”: drone owners willing to break the rules to obtain a cool shot or video footage. Other operators are more sinister, and include criminals, terrorists and nation-state actors such as intelligence services.

Spy in the Sky

Using drones is a low cost and simple way to gather information. Intelligence services can call on satellites and other high-end tools, but an off-the-shelf drone can capture video, photos and audio right out of the box. With a few modifications, a drone becomes an electronic surveillance tool, too.

“There are plenty of reports to be found of individuals or organizations building or modifying drones to carry RF-based payloads including Wi-Fi tracking, capture and access capabilities – predominantly using Raspberry Pi and Wifi Pineapple devices, but also 2/3/4G network devices,” explained Reeves. Bluetooth sniffing is also possible.

Putting a Wi-Fi access point on top of a building, or inside its perimeter, could allow hackers to listen in to data traffic. Drone operators could also drop a sophisticated microphone into a restricted area for eavesdropping, if technicians can overcome issues of power, weight and range. “Our judgment is that this is more the province of a corporate espionage operative than the average hacker,” said Reeves.

Security teams need to develop new techniques to monitor drones, and to keep sensitive information safe. Good IT security practice, including scanning for unauthorized access points, will help. But organizations will also need to look at everything from keeping window blinds closed, to how to detect and disrupt drones.

“The main security risk from drones is still their ability to bypass traditional physical controls by breaching fences or accessing the top floor of an office,” said PA Consulting’s Dale.

Down to Earth

Unfortunately for the defenders, drones are hard to spot, and even harder to disrupt. The features that appeal to consumer and professional operators make them a difficult target: they are small and quiet, and designed to overcome radio-frequency interference.

“Drones have low acoustic and thermal signatures, and low-power RF transmitters,” explained Reeves. “On a radar, they look like birds, and air traffic control radars are designed to ignore birds.”

Drones are also fast, and their transmission systems use a range of frequency-hopping techniques to maintain a good link to the controller. This makes the data link between operator and drone hard to detect, and even harder to disrupt.

For now, law-enforcement agencies and businesses are unable to take over and capture or land rogue drones. Jamming the signal is possible but illegal in much of the West, including the UK and the US, with a few exceptions for government and military agencies. (Palm Beach, Florida–based lawyer Jonathan Rupprecht has compiled a comprehensive study of US federal counterdrone law.)

That leaves more forceful countermeasures.

Rheinmetall Defence’s anti-drone laser. Image courtesy Rheinmetall Defence

Both manufacturers and law enforcement agencies have experimented with techniques involving drones or guns deploying nets, and even birds of prey. At the other end of the spectrum, German company Rheinmetall Defence has developed antidrone lasers that can be mounted on a truck or an armored vehicle.

But lasers, jamming or even lower-tech measures such as using a sniper to bring down a drone raise other issues, especially over populated areas and airports.

For now, the best defense against drones – for law enforcement and corporate security teams – remains to find and deter rogue drone operators.

“Organizations should conduct threat-modeling exercises to identify and understand the potential threats. They should consider ‘what-if’ scenarios involving drones such as a rogue access point being dropped on the roof, or the CFO’s laptop screen being filmed through the window. They then need to work out how to protect themselves from these events and how to react,” said PA Consulting’s Dale.

“As with the internet and cybersecurity, the positive and negative use of drones are two sides of the same coin and as such, you can’t have one without the other,” added Reeves. “What is certain though, is that security planning will by necessity have to include the dimension of altitude. That will have far-reaching effects.”




IBM brings artificial intelligence to the heart of cybersecurity strategies

IBM has launched IBM Security Connect, a new platform designed to bring vendors, developers, AI, and data together to improve cyber-incident response capabilities.

On Monday, the New York-based technology company unveiled the open platform, which IBM says “is the first security cloud platform built on open technologies, with AI at its core, to analyze federated security data across previously unconnected tools and environments.”

An analysis conducted by IBM suggests that cybersecurity teams in the enterprise use, on average, over 80 cybersecurity solutions provided by roughly 40 vendors.

This is a potential recipe for chaos and may reduce the overall effectiveness of security and defense.

IBM Security Connect makes use of both cloud technology and AI. Users of the platform will be able to apply machine learning and AI, including Watson for Cyber Security, to cybersecurity products to increase their effectiveness.

At launch, over a dozen security vendors and business partners have signed up.

“IBM Security Connect will help tackle some of the biggest security challenges today via open standards, which can help pave the way toward collaborative innovation,” the tech giant says. “As it is built on open standards, it can help companies build unique microservices, develop new security applications, integrate existing security solutions, and leverage data from open shared services.”

Artificial intelligence, which includes neural networking, machine learning, analytics, and the use of algorithms to complete tasks, allows machines to learn from experience.

In cybersecurity, the machine learning subset of AI has the most use — at least at this stage in AI development. While there is little use of ‘true’ cognitive AI, machine learning can provide a springboard from traditional, signature-based antivirus and cybersecurity solutions to a more extensive means of protection through data collection and analysis.

When machine learning systems are given a large enough data pool to digest and analyze, this can be used to help shrink attack surfaces through predictive analytics, the detection of what is likely to be suspicious behavior, and this, in turn, eases the burden on cybersecurity staff who often have to triage cybersecurity-related events on a daily basis.
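To make the predictive-analytics idea concrete, a minimal triage pass can score each entity by how far today’s event count deviates from its historical baseline, surfacing the likeliest suspicious behavior first. This is an illustrative sketch of the general technique, not how Watson or Security Connect works; the device names and counts below are invented:

```python
from statistics import mean, stdev

def triage_scores(baseline, today):
    """Rank entities by how far today's event count deviates from its baseline.

    baseline: entity -> list of historical daily event counts
    today:    entity -> today's event count
    Returns (entity, z-score) pairs, most anomalous first.
    """
    scores = {}
    for entity, history in baseline.items():
        mu, sigma = mean(history), stdev(history)
        sigma = sigma or 1.0  # guard against a zero-variance baseline
        scores[entity] = (today.get(entity, 0) - mu) / sigma
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical hosts: a quiet laptop that suddenly spikes, and a busy
# build server behaving normally.
baseline = {
    "hr-laptop-17": [12, 15, 11, 14, 13],
    "build-server": [220, 210, 225, 215, 218],
}
today = {"hr-laptop-17": 96, "build-server": 221}
ranked = triage_scores(baseline, today)
# The laptop's spike dwarfs its baseline, so it ranks first for review.
```

The payoff for analysts is the ordering itself: instead of triaging events in arrival order, they start with the entities whose behavior diverges most from normal.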

AI and machine learning are not perfect and cannot be considered a silver bullet for cybersecurity defense. However, solutions and platforms which leverage these technologies can give the enterprise an additional way to defend themselves against cyberattacks which are constantly evolving and increasing in sophistication.

IBM appears to have recognized this opportunity in the cybersecurity market. Alongside IBM Security Connect, the firm’s Security Operations Centers (SOCs) and Watson for Cyber Security are key elements of IBM’s move into the AI-for-cybersecurity market.

The firm’s SOCs are found in countries including the US, India, Japan, and Poland. The SOCs act as X-Force training hubs, offering training and cyberattack simulations in which virtual environments are used to recreate real-life scenarios.

The centers process over one trillion security events every month to generate threat intelligence.

Big Blue’s Watson was integrated into a security offering last year. The supercomputer, which combines AI and data analytics, acts as a knowledge repository for cybersecurity professionals using IBM’s Cognitive Security Operations Center platform.

These services are not reserved purely for the enterprise; IBM also caters for government and federal agencies.

IBM’s broader effort to develop AI solutions for modern businesses also continues with the launch of IBM AI OpenScale, an enterprise platform for creating and managing artificial intelligence applications.

In addition to IBM Security Connect, the company also announced a new addition to its Security Operations Center, a mobile unit called the IBM X-Force Command Cyber Tactical Operations Center (C-TOC).

The mobile unit will travel to companies in the US and Europe and offer training on incident response, defense strategies, and crisis leadership.

IBM has been pushing for the integration and further development of AI solutions in the enterprise. By taking a vendor-agnostic stance in the AI realm at a time when the need for cybersecurity solutions is great, the company is positioning itself as one of the major AI-security players, not only in the present but potentially in the future.





Vectra raises $36M for its AI-based approach to cybersecurity intrusion detection

With cybercrime showing no sign of abating, a startup called Vectra has raised $36 million to expand its R&D and business development. The company has built an artificial intelligence-based system called Cognito to detect cyberattacks and mobilize security systems to respond to them.

This Series D comes on the back of a strong year for the startup, with 181 percent growth in customer subscriptions between 2016 and 2017, and Vectra’s CEO Hitesh Sheth said he expects the same this year. Typical customers are large enterprises (which is why you don’t see much about pricing on the site) and include players in the financial, healthcare, government, tech and education sectors. The list the company disclosed to me includes LiveNation/Ticketmaster, Pinterest, Kronos, Tribune Media, Verifone, Agilent, Texas A&M University and DZ Bank in Germany.

This latest round is being led by Atlantic Bridge Capital, with participation from Ireland’s Strategic Investment Fund (ISIF) and Nissho Electronics Corp. Previous investors Khosla Ventures, Accel Partners, IA Ventures, AME Cloud Ventures, DAG Ventures and Wipro Ventures also participated. The company’s total raised to date is $123 million, and while it is not disclosing its valuation, its pre-money valuation of just under $344 million, according to PitchBook, based on its last funding round in March 2016, is likely getting a big boost after the growth it has seen. Also for context, one of its closer competitors, Darktrace, was last valued at $825 million.

Vectra’s growth — and the round that it has raised — underscores one of the bigger challenges in the market at the moment for enterprises and other organizations.

While there are a number of solutions out there for trying to block malicious hackers and their various techniques, and there are systems in place for stopping them when they are found, there is a gap in the market for the moments where cyber criminals evade the best blocks and then proceed to steal data, sometimes for months or more.

The Winter Olympics in Korea, as one recent example, suffered an attack that was only detected after the malicious hackers had already been sucking up data for 120 days.

“One of the issues for enterprises today is that it’s never been more hostile. The operating assumption is that you will get breached,” said Hitesh Sheth, president and CEO of Vectra. His company’s solution, he says, is not to try to change that currently immutable fact, but to drastically shrink the length of an otherwise months-long attack to minutes and hours.  “The only control you really have is what will you do once you are breached.”

Vectra does this using AI. The thinking here is that, if you are working with large enterprises, there are many places, services, apps and end points that need to be assessed for inconsistencies in how they are being queried and used in the network. Systems that are automated and use machine learning to essentially mimic the behavior of security specialists are the best at doing this kind of searching and identification.
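As a toy illustration of this behavioral approach (not Vectra’s actual Cognito algorithms), one simple network-level signal is a host querying an internal service it has never touched before, a classic marker of lateral movement. The host and service names here are hypothetical:

```python
from collections import defaultdict

def build_baseline(connections):
    """Map each host to the set of internal services it queried in training."""
    baseline = defaultdict(set)
    for host, service in connections:
        baseline[host].add(service)
    return baseline

def novel_connections(baseline, connections):
    """Return (host, service) pairs never before seen for that host."""
    return [(h, s) for h, s in connections if s not in baseline[h]]

# Hypothetical traffic: a workstation that normally talks to mail and
# file-sharing suddenly queries the billing database.
training = [("workstation-42", "mail-server"),
            ("workstation-42", "file-share"),
            ("db-admin-box", "billing-db")]
live = [("workstation-42", "mail-server"),
        ("workstation-42", "billing-db")]
alerts = novel_connections(build_baseline(training), live)
```

A real system would weigh many more features (timing, volume, protocol), but the principle is the same: the model learns what normal looks like per host and flags departures from it.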

Sheth claims that while there are a number of other intrusion and threat detection services out in the market — Darktrace, Cisco’s intrusion detection (built around a number of acquisitions) and RiskIQ being some of them — Vectra is the only one of these that is built on AI algorithms from the ground up. “AI is a bolt-on for most security players, but this is all we do.”

He also says that the other aspect of its service that helps it stand out is its focus on network, rather than end-point, traffic. “If devices are compromised, end point logs are compromised.”

Sheth describes this latest round as its “path to profitability,” and it could be the last one Vectra needs before it tips into the black, a big feat for a SaaS company that also has its sights on an IPO longer-term.

“What is a fad in the valley is to raise as much as possible and then some more,” he said. “Investors can win but I’m not sure employees do. You want to raise as much as possible but you need to see how to scale.” He said initially the company wanted to raise between $25 million and $30 million but “interest was super high and it was oversubscribed, so we accommodated investors that we thought would add value.”

The connection with the Irish strategic investment stems out of the fact that Vectra is going to build an R&D center in Dublin. This came first and the investment came second, Sheth said.

The company selected Dublin after also considering London and Barcelona (there are already three centers in the US, in Austin, Cambridge, and San Jose). It backed away from London because of uncertainties around Brexit, and from Barcelona because of political upheaval. Ireland, he believes, will only grow in prominence for its position as the only English-speaking market still fully in the European Union.

“This is an exciting investment for ISIF, which promises significant economic impact for Ireland,” said Fergal McAleavey, head of private equity at ISIF, in a statement. “It is encouraging to see Ireland leverage its emerging expertise in artificial intelligence by attracting businesses such as Vectra that are on the leading edge of technology. With cybersecurity becoming such a critical issue for all organizations, we are confident that Vectra will deliver a strong economic return on our investment while creating high-value R&D employment here in Ireland.”

Meanwhile, the company’s growth is what swayed the lead investor.

“We have been impressed by the remarkable growth of Vectra in this fast-moving cybersecurity market,” said Kevin Dillon, managing partner at Atlantic Bridge Capital, in a statement. “The increasing volume, creativity and effectiveness of cyberattacks means that enterprises must adopt AI to automate cybersecurity operations. We look forward to helping the company expand its global enterprise footprint.”






JASK and the future of autonomous cybersecurity – TechCrunch

There is a familiar trope in Hollywood cyberwarfare movies. A lone whiz kid hacker (often with blue, pink, or platinum hair) fights an evil government. Despite combatting dozens of cyber defenders, each of whom appears to be working around the clock and has very little need to use the facilities, the hacker is able to defeat all security and gain access to the secret weapon plans or whatever have you. The weapon stopped, the hacker becomes a hero.

The real world of security operations centers (SOCs) couldn’t be further from this silver screen fiction. Today’s hackers (who are the bad guys, by the way) don’t have the time to custom hack a system and play cat-and-mouse with security professionals. Instead, they increasingly build a toolbox of automated scripts and simultaneously hit hundreds of targets using, say, a newly discovered zero-day vulnerability and trying to take advantage of it as much as possible before it is patched.

Security analysts working in a SOC are increasingly overburdened and overwhelmed by the sheer number of attacks they have to process. Yet, despite the promises of automation, they are often still using manual processes to counter these attacks. Fighting automated attacks with manual actions is like fighting mechanized armor with horses: futile.

Nonetheless, that’s the current state of things in the security operations world. But as V.Jay LaRosa, VP of Global Security Architecture at payroll and HR company ADP, explained to me, “The industry, in general from a SOC operations perspective, it is about to go through a massive revolution.”

That revolution is automation. Many companies have claimed that they are bringing machine learning and artificial intelligence to security operations, and the buzzword has been a mainstay of security startup pitch decks for some time. In many cases, the results have been lackluster at best. But a new generation of startups is now replacing soaring claims with hard science, focusing on the time-consuming low-hanging fruit of the security analyst’s work.

One of those companies, as we will learn shortly, is JASK. The company, which is based in San Francisco and Austin, wants to create a new market for what it calls the “autonomous security operations center.” Our goal is to understand the current terrain for SOCs, and how such a platform might fit into the future of cybersecurity.

Data wrangling and the challenge of automating security

The security operations center is the central nervous system of corporate security departments today. Borrowing concepts from military organizational design, the modern SOC is designed to fuse streams of data into one place, giving security analysts a comprehensive overview of a company’s systems. Those data sources typically include network logs, an incident detection and response system, web application firewall data, internal reports, antivirus, and many more. Large companies can easily have dozens of data sources.
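A minimal sketch of that fusion step, assuming two invented log formats (a space-delimited firewall line and a JSON IDS alert), shows how disparate sources can be mapped onto one common event schema:

```python
import json
from datetime import datetime, timezone

def normalize_firewall(line):
    """Parse a hypothetical firewall line: 'epoch src_ip dst_ip action'."""
    ts, src, dst, action = line.split()
    when = datetime.fromtimestamp(int(ts), tz=timezone.utc).isoformat()
    return {"time": when, "source": "firewall",
            "src_ip": src, "dst_ip": dst, "outcome": action}

def normalize_ids(record):
    """Map a hypothetical IDS JSON alert onto the same schema."""
    r = json.loads(record)
    return {"time": r["timestamp"], "source": "ids",
            "src_ip": r["attacker"], "dst_ip": r["victim"],
            "outcome": r["signature"]}

events = [
    normalize_firewall("1525000000 10.0.0.5 10.0.0.9 DENY"),
    normalize_ids('{"timestamp": "2018-04-29T10:30:00+00:00", '
                  '"attacker": "10.0.0.5", "victim": "10.0.0.9", '
                  '"signature": "ET SCAN Nmap"}'),
]
# Both feeds now share one schema, so an analyst (or an algorithm)
# can pivot on src_ip across them.
```

Once every feed lands in the same shape, correlating across dozens of data sources becomes a query rather than a manual copy-and-paste exercise.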

Once all of that information has been ingested, it is up to a team of security analysts to evaluate that data and start to “connect the dots.” These professionals are often overworked since the growth of the security team is generally reactive to the threat environment. Startups might start with a single security professional, and slowly expand that team as new threats to the business are discovered.

Given the scale and complexity of the data, investigating a single security alert can take significant time. An analyst might spend 50 minutes just pulling and cleaning the necessary data to be able to evaluate the likelihood of a threat to the company. Worse, alerts are sufficiently variable that the analyst often has to repeatedly perform this cleanup work for every alert.

Data wrangling is one of the most fundamental problems that every SOC faces. All of those streams of data need to be constantly managed to ensure that they are processed properly. As LaRosa from ADP explained, “The biggest challenge we deal with in this space is that [data] is transformed at the time of collection, and when it is transformed, you lose the raw information.” The challenge then is that “If you don’t transform that data properly, then … all that information becomes garbage.”

The challenges of data wrangling aren’t unique to security — teams across the enterprise struggle to design automated solutions. Nonetheless, just getting the right data to the right person is an incredible challenge. Many security teams still manually monitor data streams, and may even write their own ad-hoc batch processing scripts to get data ready for analysis.
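One pattern that addresses LaRosa’s concern about lossy transforms is to keep the untouched raw record alongside whatever the transform produces, so a bad parse never destroys evidence. A minimal sketch, using a hypothetical syslog-style layout:

```python
def ingest(raw_line, parser):
    """Attach parsed fields to an event while always keeping the raw record."""
    event = {"raw": raw_line}
    try:
        event.update(parser(raw_line))
        event["parse_ok"] = True
    except (ValueError, KeyError):
        # A bad transform must not discard the evidence.
        event["parse_ok"] = False
    return event

def syslog_parser(line):
    """Hypothetical 'host process message' layout."""
    host, process, message = line.split(" ", 2)
    return {"host": host, "process": process, "message": message}

good = ingest("web01 sshd Failed password for root", syslog_parser)
bad = ingest("corrupt-record", syslog_parser)
# 'bad' fails to parse, but its raw line survives for later reprocessing.
```

If a parser turns out to be wrong, records ingested this way can simply be re-run through a corrected one; nothing was lost at collection time.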

Managing that data inside the SOC is the job of a security information and event management system (SIEM), which acts as a system of record for the activities and data flowing through security operations. Originally focused on compliance, these systems allow analysts to access the data they need, and also log the outcome of any alert investigation. Products like ArcSight and Splunk, among many others, have owned this space for years, and the market is not going anywhere.

Due to their compliance focus though, security management systems often lack the kinds of automated features that would make analysts more efficient. One early response to this challenge was a market known as user entity behavior analytics (UEBA). These products, which include companies like Exabeam, analyze typical user behavior and search for anomalies. In this way, they are meant to integrate raw data together to highlight activities for security analysts, saving them time and attention. This market was originally standalone, but as Gartner has pointed out, these analytics products are increasingly migrating into the security information management space itself as a sort of “smarter SIEM.”
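The core UEBA idea can be sketched in a few lines: learn when each user normally behaves a certain way, then flag behavior well outside that baseline. The example below models only login hours and is deliberately simplified; real products such as Exabeam weigh many more behavioral features. The user and hours are invented:

```python
def login_hour_baseline(logins):
    """Learn the hours (0-23) at which each user has historically logged in."""
    baseline = {}
    for user, hour in logins:
        baseline.setdefault(user, set()).add(hour)
    return baseline

def unusual_logins(baseline, logins, slack=1):
    """Flag logins more than `slack` hours (circularly) from any seen hour."""
    def distance(a, b):
        return min(abs(a - b), 24 - abs(a - b))
    flagged = []
    for user, hour in logins:
        seen = baseline.get(user, set())
        if seen and all(distance(hour, h) > slack for h in seen):
            flagged.append((user, hour))
    return flagged

# Hypothetical user who works office hours, then logs in at 3 a.m.
history = [("alice", h) for h in (9, 10, 11, 14, 16, 17)]
baseline = login_hour_baseline(history)
flagged = unusual_logins(baseline, [("alice", 10), ("alice", 3)])
```

The 10 a.m. login matches the learned pattern and passes silently; only the 3 a.m. outlier is surfaced, which is exactly the attention-saving behavior these products promise.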

These analytics products added value, but they didn’t solve the comprehensive challenge of data wrangling. Ideally, a system would ingest all of the security data and start to automatically detect correlations, grouping disparate data together into a cohesive security alert that could be rapidly evaluated by a security analyst. This sort of autonomous security has been a dream of security analysts for years, but that dream increasingly looks like it could become reality quite soon.

LaRosa of ADP told me that “Organizationally, we have got to figure out how we help our humans to work smarter.” David Tsao, Global Information Security Officer of Veeva Systems, was more specific, asking “So how do you organize data in a way so that a security engineer … can see how these various events make sense?”

JASK and the future of “autonomous security”

That’s where a company like JASK comes in. Its goal, simply put, is to take all the disparate data streams entering the security operations center and automatically group them into attacks. From there, analysts can then evaluate each threat holistically, saving them time and allowing them to focus on the sophisticated analytical part of their work, instead of on monotonous data wrangling.
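A heavily simplified sketch of that grouping step (not JASK’s actual pipeline): cluster raw alerts that share an entity and fall within a time window into a single candidate attack for the analyst to review. The alert data is invented:

```python
def group_alerts(alerts, window=3600):
    """Cluster time-sorted (epoch, entity, description) alerts that share an
    entity and arrive within `window` seconds into candidate attacks."""
    groups = []
    for ts, entity, desc in alerts:
        for g in groups:
            if g["entity"] == entity and ts - g["last_seen"] <= window:
                g["alerts"].append(desc)
                g["last_seen"] = ts
                break
        else:
            groups.append({"entity": entity, "last_seen": ts, "alerts": [desc]})
    return groups

# Hypothetical raw alerts: three stages of one intrusion against 10.0.0.5,
# plus an unrelated failed login elsewhere.
raw = [(1000, "10.0.0.5", "port scan"),
       (1400, "10.0.0.5", "brute-force login"),
       (2000, "10.0.0.5", "outbound transfer spike"),
       (1200, "10.0.0.8", "failed login")]
attacks = group_alerts(sorted(raw))
# Four raw alerts collapse into two candidate attacks for review.
```

Instead of triaging four separate tickets, the analyst sees one coherent story about 10.0.0.5 and one minor event elsewhere, which is the time savings the autonomous-SOC pitch rests on.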

The startup was founded by Greg Martin, a security veteran who previously founded threat intelligence platform ThreatStream (now branded Anomali). Before that, he worked as an executive at ArcSight, a company that is one of the incumbent behemoths in security information management.

Martin explained to me that “we are now far and away past what we can do with just human-led SOCs.” The challenge is that every single security alert coming in has to go through manual review. “I really feel like the state of the art in security operations is really how we manufactured cars in the 1950s — hand-painting every car,” Martin said. “JASK was founded to just clean up the mess.”

Machine learning is one of the most abused terms in the startup world, and cybersecurity is certainly no exception. Visionary security professionals wax poetic about automated systems that instantly detect hackers as they attempt to gain access to a system and immediately respond with tested actions designed to thwart them. The reality is much less exciting: just connecting data from disparate sources is a major hurdle for AI researchers in the security space.

Martin’s philosophy with JASK is that the industry should walk before it runs. “We actually look to the autonomous car industry,” he said to me. “They broke the development roadmap into phases.” For JASK, “Phase one would be to collect all the data and prepare and identify it for machine learning,” he said. LaRosa of ADP, talking about the potential of this sort of automation, said that “you are taking forty to fifty minutes of busy work out of that process and allow [the security analysts] to get right to the root cause.”

This doesn’t mean that security analysts are suddenly out of a job; far from it. Analysts still have to interpret the information that has been compiled, and even more importantly, they have to decide on the best course of action. Today’s companies are moving from “runbooks” of static response procedures to automated security orchestration systems. Machine learning is realistically far from being able to handle the full lifecycle of an alert today, although Martin is hopeful that such automation is coming in later phases of the roadmap.

Martin tells me that the technology is being used by twenty customers today. The company’s stack is built on technologies like Hadoop, allowing it to process significantly higher volumes of data compared to legacy security products.

JASK is essentially carving out a unique niche in the security market today, and the company is currently in beta. The company raised a $2m seed from Battery in early 2016, and a $12m series A led by Dell Technologies Capital, which saw its investment in security startup Zscaler IPO last week.

There are thousands of security products in the market, as any visit to the RSA conference will quickly convince you. Unfortunately though, SOCs can’t just be built with tech off the shelf. Every company has unique systems, processes, and threat concerns that security operations need to adapt to, and of course, hackers are not standing still. Products need to constantly change to adapt to those needs, which is why machine learning and its flexibility is so important.

Martin said that “we have to bias our algorithms so that you never trust any one individual or any one team. It is a careful controlled dance to build these types of systems to produce general purpose, general results that apply across organizations.” The nuance around artificial intelligence is refreshing in a space that can see incredible hype. Now the hard part is to keep moving that roadmap forward. Maybe that blue-haired silver screen hacker needs some employment.




Georgia Hacking Bill SB315 Gets Cybersecurity All Wrong

In March, the Georgia State General Assembly passed a bill that would make it illegal to access a computer or network “without authority.” Georgia Governor Nathan Deal has until Tuesday to decide whether to sign it into law or veto it. The 40-day limbo has morphed from a bureaucratic formality, though, into a heated debate with national implications. In just 43 lines, the bill raises fundamental questions about how to establish boundaries in cyberspace without hindering vital security research and, crucially, the ethics of “hacking back,” in which institutions that have been attacked can digitally pursue the hackers and even potentially retaliate.

Georgia Senate Bill 315 emerged in part out of an embarrassing and troubling incident in which a massive trove of sensitive election and voter data sat exposed for months in Georgia’s unified election center at Kennesaw State University. Frustrated that it wasn’t illegal for people to access the data when it was accidentally publicly available, lawmakers set out to limit the legality of unauthorized computer access. But critics say that the resulting legislation as written is too vague, and threatens to outlaw certain types of digital forensic research while exempting—and therefore potentially condoning—dangerous “cybersecurity active defense measures.”

“I don’t think this legislation actually solves a problem,” says Jake Williams, founder of the Georgia-based security firm Rendition Infosec. “Information put in a publicly accessible location can and will be downloaded by unintended parties. Making that illegal brings into question so many other issues, like what is ‘authorized’ use? Is violating terms of service illegal?”


Hackers calling themselves SB315, meanwhile, have apparently launched attacks against a church, the City of Augusta, two restaurants, and Georgia Southern University in protest. The group claimed in a message on Calvary Baptist Church of Augusta’s website, according to the Augusta Chronicle, that they couldn’t report the vulnerability they exploited to infiltrate the site, because the legislation would make it illegal. In their various hacks, the group leaked what it claimed were compromised login credentials and other personal information, but the data from the City of Augusta and Georgia Southern University could also have been cobbled together from publicly accessible records.

“Protests resorting to hacking and threats of retaliation will do nothing but scare these particular legislators further and strengthen their resolve for the need for this sort of bill,” says Williams.

Beyond the stunt hacks, prominent digital rights organizations and even large tech firms have taken a hard stand against the bill. The Electronic Frontier Foundation said in April that the law would, “severely chill independent researchers’ ability to shine light on computer vulnerabilities,” describing it as “misguided.” Security researchers often find flaws and weaknesses in organizations’ networks incidentally, or through proactive probing. The Georgia bill would likely make this type of work illegal, because it would be considered “unauthorized computer access.” It would discourage people who find problems in digital systems from disclosing them so they could be fixed—a situation that hurts everyone by reducing collective security.

The proposed legislation in Georgia is far from the first time this tension has surfaced. The federal Computer Fraud and Abuse Act, which has similar provisions about computer and network access, has caused controversy for decades.

The stakes are higher than ever to agree on a path forward, though, as cyberaggression ramps up domestically and around the world. “Georgia codifying this concept in its criminal code is potentially a grave step that has some known and many unknown ramifications,” representatives of Google and Microsoft wrote in a joint letter to Governor Deal in April urging him to veto the legislation. “Network operators should indeed have the right and permission to defend themselves from attack, but … provisions such as this could easily lead to abuse and be deployed for anticompetitive, not protective purposes.”


One of the primary issues raised by “hacking back” is the simple question of whether victims can accurately identify their aggressors, trace the correct source, and retaliate against the right entity. Attribution is notoriously challenging in digital forensics, and traffic or commands that appear to originate from one source may actually have come from elsewhere. Additionally, attackers often hide behind third-party computers that they have compromised with malware to do their bidding. In the Wild West of hacking back, victims could easily end up doubling down on bystander devices that are already the target of malware campaigns.

Georgia’s not alone in exploring hacking back; Congress has considered it as well. Reacting to the numerous digital threats the United States currently faces, particularly from Russian hackers, Representatives Tom Graves of Georgia and Kyrsten Sinema of Arizona introduced a federal bill in the fall, the Active Cyber Defense Certainty Act, that would give hacking victims leeway to penetrate attackers’ networks. But while security experts have long warned about the dangers and potential for escalation involved in allowing unchecked retaliation, the idea of turning it into a state-by-state issue is even more unwieldy and murky.

With only a few days left before the deadline for a decision, Jen Talaber Ryan, deputy chief of staff for communications in Governor Deal’s office, told WIRED that, “the governor is carefully reviewing the bill, including the input received from stakeholders on all sides.” But regardless of the outcome, the uproar over the Georgia bill reflects broader uncertainty and fear over how to handle digital threats. And the concept of hacking back is stubbornly appealing when lawmakers at all levels of government struggle to feel in control of an opaque problem.





The Bleak State of Federal Government Cybersecurity

It’s a truism by now that the federal government struggles with cybersecurity, but a recent report by the White House’s Office of Management and Budget reinforces the dire need for change across dozens of agencies. Of the 96 federal agencies it assessed, it deemed 74 percent either “At Risk” or “High Risk,” meaning that they need crucial and immediate improvements.

While the OMB findings shouldn’t come as a complete shock, given previous bleak assessments—not to mention devastating government data breaches—the stats are jarring nonetheless. Not only are so many agencies vulnerable, but over half lack even the ability to determine what software runs on their systems. And only one in four agencies could confirm that they have the capability to detect and investigate signs of a data breach, meaning that the vast majority are essentially flying blind. “Federal agencies do not have the visibility into their networks to effectively detect data exfiltration attempts and respond to cybersecurity incidents,” the report states bluntly.

Perhaps most troubling of all: In 38 percent of government cybersecurity incidents, the relevant agency never identifies the “attack vector,” meaning it never learns how a hacker perpetrated an attack. “That’s definitely problematic,” says Chris Wysopal, CTO of the software auditing firm Veracode. “The whole key of incident response is understanding what happened. If you can’t plug the hole the attacker is just going to come back in again.”

Producing the “Risk Determination Report and Action Plan” was a requirement of the Trump administration’s May cybersecurity Executive Order, and while issuing the EO was a positive step in terms of prioritizing digital defense, progress overall has been mixed. The report also comes at a time when the White House has been sending conflicting messages about its focus on cybersecurity; last month the Trump administration eliminated its top two cybersecurity policy and management leadership roles, including one that specifically oversaw federal government cybersecurity.


In a letter on Wednesday, a group of 12 Democratic senators asked national security adviser John Bolton to reconsider cutting the positions. “The Cybersecurity Coordinator historically has worked with agencies to develop a harmonized strategy,” the senators wrote. “While we recognize the importance of streamlining positions, we are concerned the decision to eliminate this role will lead to a lack of unified focus against cyber threats.”

Security analysts worry that without that specific oversight, discussion about current deficiencies and recommendations for fixing them will go nowhere.

“My initial gut feeling about the report was ‘oh good they’re paying attention and starting to address these issues,'” says Alex Heid, chief research officer at the risk management firm SecurityScorecard, which tracks cybersecurity preparedness across the government and other sectors. “But the findings really highlight the blind spots. There’s still a long way to go, because it’s such a massive problem and there has not been any real accountability.”

Creating that accountability is one of the report’s four recommendations, along with increasing awareness, implementing existing government guidelines and frameworks, and consolidating and standardizing defense to use resources more efficiently. Some argue, though, that the document is too vague about both the problems and the fixes. For example, it doesn’t name the agencies it surveyed or where they fall in the assessment. As a result, it’s difficult to tell whether the agencies at risk are relatively benign, or huge institutions that manage an array of deeply sensitive data. Similarly, the report gives aggregate information about security incidents, but doesn’t offer any granularity for minor blips versus major catastrophes.

“The government CISOs and CIOs I’ve talked to know what their issues are and they’re on a path of fixing what they can with what they’ve got and asking for more budget,” says Michael Chung, head of government solutions at the bug bounty facilitator Bugcrowd, who recently left the Pentagon’s Defense Digital Services. “But with the top cyber positions gone there is a gap in leadership, so I take this report with a grain of salt.”

Safety concerns likely limit exactly how much OMB can disclose, but after years of increased awareness about the shortcomings of federal cybersecurity defenses, analysts worry that the report is simply perfunctory. “One thing they seem to have kind of punted on is the whole legacy tech modernization issue,” Veracode’s Wysopal notes. “And to me that’s probably the biggest and most important issue. Agencies are using five different versions of Windows going back 10 years, running multiple versions of things like Java and Flash, and their email is a huge mess. You’re never going to be able to hire enough personnel to manage all that risk without simplifying and standardizing.”

The OMB says that the report represents a plan for implementing defense improvements and reducing risk over the next 12 months, but it’s unclear how such generalized recommendations translate to tailored one-year programs across dozens of organizations. And even if it did, the report itself notes the barriers to effecting positive change. “The assessments show that CIOs and CISOs often lack the authority necessary to make organization-wide decisions,” it notes, calling the finding “concerning.” Without leadership at the very top of each organization and from the White House, some observers doubt that it will actually be possible to make big changes in the near future.
