Speaking in a keynote talk at the RSA Conference in San Francisco, Mary T. Barra, chairman and CEO of General Motors Company, acknowledged that “no one in this room needs convincing that there are virtually no industries today that are not vulnerable to cyber-attacks.”
She said that the auto industry is no exception, as it is bringing technologies and features to market, while users expect seamless integration with their devices, “and it is always our intention that customers and their data are always safe, secure, and private.”
GM built a “proactive cybersecurity organization” with hands-on engagement from the board, as it views cybersecurity “not just as a competitive advantage, but as a systemic concern for our industry.”
Barra said that the automotive industry remains competitive, but is an area “where we must, and really do, collaborate and share best practices.” GM works with the Auto-ISAC for information sharing, and is focused on securing the automotive process at every stage.
Referring to the Cruise autonomous vehicle arm of GM, she said that human error is responsible for 9 out of 10 crashes, and that GM is keen to provide customers with “the safest products and strongest cybersecurity” while giving them “greater convenience, better accessibility, at an affordable cost.”
Barra said that GM spends around $100m per year on cybersecurity and looks at risk end-to-end, with “no shortcuts” taken by the nearly 500 practitioners “developing defense-in-depth, monitoring and incident response capabilities that we continually test, rework, and refine.”
One partner GM has worked with is HackerOne, “to engage more closely with the research community and identify vulnerabilities before they become an issue.” She said that this commitment showed GM’s determination to maintain best practices in cybersecurity, and that it had re-engineered its development program to create the Vehicle Intelligence Platform (VIP) to support safety systems, 5G networks, over-the-air updates, “and enhanced cybersecurity protections.”
She concluded by saying that “we know this is a marathon with no finish line” and stressed the need for more talent.
Criminals are using a combination of server exploitation, email, and voice calls to execute voice phishing attacks, often referred to as vishing.
In a session at the RSA Conference in San Francisco, John LaCour, founder and CTO at PhishLabs, and Davey Ware, Special Agent at the FBI, detailed the mechanics of how vishing attacks work to defraud victims of money, as well as how one group of criminals was found.
"Vishing attacks are phishing attacks that use the telephone network," LaCour said.
He explained that in vishing attacks the lure is delivered in one of several ways, including an email message with a call-back number, SMS via a telephone provider, and robocalls from an interactive voice response (IVR) system. According to data cited by LaCour, over a one-year period more than 50% of vishing attacks targeted small banks and credit unions.
Vishing attacks occur in stages: attackers first compromise a Windows server, typically using some form of Remote Desktop Protocol (RDP) backdoor to gain access, then compromise IVR systems and create fake email accounts as well.
The FBI Investigation
The FBI is aware of vishing attacks and has been actively involved in tracking down criminals. Ware detailed one such investigation involving three vishing hackers from Romania who had exploited a small bank in South Carolina.
By going through the logs of the impacted bank the FBI identified a number of clues, including IP addresses from RDP sessions. With some basic internet searching, Ware said, the FBI was able to make a link to a Facebook account and then via legal processes was able to get additional information on the criminals.
The FBI then found further evidence in Facebook chats that tied three Romanian individuals to the vishing attack. Over a two-year period, Ware said, the FBI collected enough evidence that they felt they could go to the next step, connecting with law enforcement in Romania.
Arresting the Vishers
Romanian law enforcement, working with the FBI, raided the homes of all three suspects at the same time in 2014. Ware noted that one of the criminals threw his laptop and power cord out the window as soon as police showed up. Luckily, the laptop landed in the snow and all the data on it was recovered intact.
At the time of the raid, Ware noted, there was an active RDP session open on the laptop, with a text file including credit card numbers.
"They were literally doing the scheme when the search warrant was served," he said.
While the raid was conducted in 2014, the legal process takes time. All three suspects were indicted in 2017, extradited to the US in 2018 and, after pleading guilty, sentenced in 2019 to jail terms of approximately eight years.
"Why we're talking about this case now is because it has been fully adjudicated, so we can talk about it," Ware said. "We want to present this because attackers are still using the same tactics now."
In a talk at the RSA Conference in San Francisco, LexisNexis Risk Solutions director of product management Daniel Ayoub and VP of product management Dean Weinert discussed which metrics and identifiers browsers actually reveal about users.
In a talk titled “Creepy Leaky Browsers,” Ayoub said that the classic cartoon caption “on the internet, nobody knows you’re a dog” is becoming less true, as there is now so much more information available via a browser. The concept of a browser fingerprint involves a combination of persistent and non-persistent identifiers gathered passively through application programming interfaces (APIs) built into modern web browsers.
Ayoub said these browser fingerprints are typically used for:
- Digital marketing
- Improving the user experience
- Return device recognition
- Fraud prevention
Weinert said that this all “began with cookies,” but browsers went further when cookie use was limited, so identifiers could be derived from a user’s network information, external IP address, screen resolution, and type of GPU. Ayoub said that many of these APIs were introduced in the late 2000s, before the EFF raised concerns about browser privacy in 2010.
“As time moved on, we saw more APIs added to browsers, and they offered details on what hardware was attached, how much RAM was used, and which APIs were now baked into the browser,” he said. This allows someone to know how a user interacts with a device, and “the key point is that real-world apps that benefit consumers take into account fingerprinting, and these are used every day in the background, and most people are unaware of it.”
Their research into different browsers showed that there were different details revealed; for example, Firefox doesn’t reveal the device memory, while Google Chrome OSX does, and some browsers support Bluetooth adapters, while some do not.
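How attributes like these combine into a single identifier can be sketched in Python. The attribute names below are illustrative placeholders, not taken from any specific fingerprinting product; in a real deployment they would be read in the browser via APIs such as `navigator` and `screen`.

```python
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Combine browser-reported attributes into a stable fingerprint hash."""
    # Serialize with sorted keys so the same attributes always hash identically
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical device profile; deviceMemoryGB is the kind of value
# Chrome exposes but Firefox withholds, as the researchers noted.
device_a = {
    "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15)",
    "screenResolution": "2560x1440",
    "timezone": "America/Los_Angeles",
    "deviceMemoryGB": 8,       # exposed by some browsers, not others
    "hardwareConcurrency": 8,  # logical CPU cores
}

print(fingerprint(device_a))
```

Because the hash is deterministic, the same device profile produces the same identifier on every visit, while any change to a single attribute yields a different one.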
To better protect yourself while using the internet, Ayoub and Weinert recommended trying to “blend in” rather than stand out, “as most people don’t try to hide, and the best strategy is to use common operating systems and browsers.”
However, this causes an issue when trying to spot cyber-criminals, as Weinert said that the “bad guys look like regular users,” and as more browsers obfuscate, “if everything is vanilla it is harder to find the wolf among the sheep.”
Weinert said that browser vendors have realized that they had to put privacy first, and he urged vendors to collaborate to the degree where standards can be determined, and to “also do the right thing” when device profiles are offered for bulk resale.
For users, Ayoub recommended using the latest versions of browsers, visiting fingerprinting sites to see what they are comfortable with, and considering browser tools that are designed for privacy.
“Also opt out where appropriate,” he said, recommending that users find the Advertiser ID on their device and switch it off or reset it.
Last year's data breach at the Desjardins Group will cost the co-operative far more than initially anticipated.
Original estimates by the Quebec-based financial institution set the cost of recovering from the breach at $70m. The co-operative has now said that the final breach bill is likely to be $108m.
The data breach was intentionally carried out by a malicious employee who had access to banking details such as loans and savings. As a result of their actions, the data of 4.2 million customers who bank with Desjardins in Quebec and Ontario was exposed.
Six months after the breach was announced, the incident was found to have also affected 1.8 million credit card holders who were not Desjardins members. The employee at the center of the breach has since been fired.
News of the breach came to light in June last year. From July onward, Desjardins introduced identity protection for all members who bank with the co-operative in Quebec and Ontario, free of charge.
In November, Desjardins issued an online statement that implied that data exposed in the breach had not been misused.
The statement said: "Desjardins would like to remind its members that there was no spike in fraud cases, either before or after the privacy breach was announced on June 20."
While the repair bill does not make comfortable reading for the faint-hearted, Desjardins president and chief executive officer Guy Cormier said that the financial impact of the breach represents less than 1% of the $18bn in revenue the institution earned in 2019.
According to Cormier, Desjardins has "ample capacity" to absorb the cost of the breach into its everyday operations.
Driving up the cost of recovery is the package of compensation measures Desjardins offered its members in the wake of the breach. Included in the package was five years of free credit monitoring from Equifax, which suffered its own catastrophic data breach in 2017 in which personal data of almost half the population of the United States of America was exposed.
Cormier said that no further increase in costs related to the data breach is expected.
Researchers at the University of Texas have found a way to bamboozle malicious hackers into giving away their secrets.
The DEEP-Dig (DEcEPtion DIGging) method tricks hackers onto a decoy site set up to record whatever sneaky tactics are thrown at it. This information is then fed into a computer, where it is analyzed to produce clues on how to identify and fend off future hacking attacks.
University of Texas at Dallas computer scientists presented papers on their wily new work at the annual Computer Security Applications Conference in December in Puerto Rico and at the Hawaii International Conference on System Sciences.
Furtively obtaining information from hackers that can later be used against them is a rapidly growing cybersecurity field known as deception technology. This cunning approach encourages those working in cybersecurity to view cyber-attacks in a whole new light.
“There are criminals trying to attack our networks all the time, and normally we view that as a negative thing,” said Dr. Kevin Hamlen, Eugene McDermott Professor of Computer Science.
“Instead of blocking them, maybe what we could be doing is viewing these attackers as a source of free labor. They’re providing us data about what malicious attacks look like. It’s a free source of highly prized data.”
Privacy restrictions can make it difficult for researchers to obtain sufficient data on attackers' tactics to create effective defense strategies. DEEP-Dig functions like a spy in the attacking camp, gathering up valuable real-time information on how hackers strike.
Dr. Gbadebo Ayoade, who presented the scientists' findings in Puerto Rico and Hawaii, said that having more data will make it easier to detect when an attack is under way.
“We’re using the data from hackers to train the machine to identify an attack,” said Ayoade. “We’re using deception to get better data.”
Dr. Latifur Khan, professor of computer science at UT Dallas, said "attackers will feel they're successful" when they encounter the decoy site stocked with disinformation.
Mirroring the cyber-criminal’s domain-spoofing technique and using it against them to gain a window into their activity might appear like poetic justice; to Khan, it's simply another roll of the dice.
Describing the ongoing online battle between the lawless and the law-abiding, Khan said: "It's an endless game."
Traditional organized crime gangs are now making efforts to succeed in financial cybercrime in Latin America.
According to research by IntSights into cybercrime activities in central and south America, persistent cyber-criminals are operating extensive schemes targeting banks, hospitality services, and retail businesses for their credentials and financial assets.
Because the attackers deliberately changed their tactics and infrastructure but tended to reuse the same profiles, the IntSights research team was able to pinpoint their locations. This included one attacker based in Colombia who was originally from Venezuela and had escaped poverty and government censorship to pursue cybercrime as a career.
Alongside economic struggles, political corruption, internet censorship, and the rise of organized crime, cybercrime has emerged in Latin America, with attackers specifically focused on financial gain.
Speaking to Infosecurity at the RSA Conference in San Francisco, IntSights cyber-threat intelligence advisor Charity Wright said that the intelligence team was initially tipped off by the appearance of multiple phishing sites, “but what we found was that it was a single person and he was building a team.” The attacker, called Charles or Carlos, was the one originally from Venezuela, and “he found a way to make money by scamming people out of their credentials for their bank accounts.”
The research found that he was using fraudulent sponsored adverts on search engines and social media to lure people into giving up their details. “He mostly evangelizes his tactics and techniques to other people in Latin America,” Wright said. “He teaches other people about what he does, and also targets American banks.”
Wright said that there are four major threat landscape factors that are contributing to the cybercrime emergence in Latin America:
- Economic instability
- Social factors like poverty
- Corruption and bribery
- The population growth, and use of technology
This all adds up to a combination of a need to make money, a “new” user base for technology, and governments and law enforcement that either overlook the issue because they are dealing with larger crimes, or turn a blind eye to smaller fraudulent crimes. “They are making millions of dollars now,” Wright added.
She also said that a lack of legislation is another factor: while Brazil leads the way with over 40 different data privacy regulations in place, it is currently consolidating these into one overarching policy called the Lei Geral de Proteção de Dados (LGPD), forecast to be implemented in August 2020.
This law will be similar to GDPR and will focus on keeping companies accountable for their customers’ data, with non-compliance potentially resulting in a 2% annual revenue penalty, which Wright said would be crippling for retailers and banks that are already struggling to fight fraud and cybercrime.
“So all of the other factors considered, none of the enterprises are being held accountable for the protection of data of their users and employees,” she said. “There is a lot of skepticism, but I am advising businesses in the region to stay ahead of this because if they do not understand what is expected of them and how to plan for it and do it, they are going to face fines. They cannot afford to be non-compliant.”
In terms of cyber-criminal actions that verge on state-sponsored attacks and intelligence gathering, Wright said that there are some hacktivist-style groups, but these are not as prevalent as the low-level threat actors with some technical skill. “Those with technical skill are being recruited into cartels and organized crime groups, the rest of them are just really good at fraud.”
America's Democratic National Committee has warned its electoral candidates to be wary after a phony Bernie Sanders campaign staffer used a fake domain to contact other political campaigns.
The cyber-imposter attempted to set up conversations with at least two other campaigns using a spoofed domain registered outside the United States. Sanders campaign spokesperson Mike Casca said yesterday that he believed the domain to be registered in Russia.
Casca said that the detection of the imposter was an indication that the party's cybersecurity was working well.
“It’s clear the efforts and investments made by the DNC and all the campaigns to shore up our cybersecurity systems are working,” Casca told the Associated Press. “We will remain vigilant and continue to learn from each incident.”
DNC chief security officer Bob Lord emailed the party's presidential campaigns yesterday, urging them to be on the lookout for charlatans. Lord said that “adversaries will often try to impersonate real people on a campaign” to get people to “download suspicious files or click on a link to a phishing site.”
Campaigns were also instructed to question the plausibility of anyone attempting to arrange a call or meeting that could be recorded or published.
Though authorities have been notified about the fraudulent Sanders staffer, Lord expressed little hope that the impersonator would be identified, noting that "attribution is notoriously hard."
In an effort to sort the real domains from the fake, Lord wrote in his email to campaigns: "If you are using an alternate domain, please refrain from doing so and let us know if you are operating from a domain that others have not corresponded with before."
The CSO then instructed campaign staffers not to use their personal email accounts for official business.
If Lord's message sounds a trifle paranoid, it's worth remembering that a phishing attack on John Podesta, chairman of Hillary Clinton's 2016 presidential campaign, resulted in thousands of emails being hacked and leaked.
Podesta was deceived by an official-looking email sent to his Gmail account. Purporting to be from Google, the message warned Podesta that someone in Ukraine had accessed his personal Gmail password and had tried to log into his account. The email implored Podesta to immediately change his password, directing him to a malicious website to achieve this.
A former Microsoft engineer faces 20 years behind bars after being found guilty of attempting to defraud his ex-employer of $10m.
Ukrainian citizen Volodymyr Kvashuk, 25, from Renton, Washington, was initially a contractor for the tech giant before going full time there from August 2016 until he was fired in June 2018.
He was convicted on Tuesday of 18 federal felonies: five counts of wire fraud, six counts of money laundering, two counts of aggravated identity theft, two counts of filing false tax returns and one count each of mail fraud, access device fraud and access to a protected computer in furtherance of fraud.
According to court documents, Kvashuk worked on Microsoft’s online retail sales platform where he used his IT access to steal digital gift cards and other “currency stored value,” before selling them on the internet.
Although the amounts he stole started off relatively small, totaling around $12,000, they soon grew into millions of dollars.
Kvashuk is said to have set up test email accounts under the names of Microsoft employees and used Bitcoin mixing services to hide his tracks and the source of the funds entering his bank accounts.
According to the Department of Justice (DoJ), over $2.8m in Bitcoin was transferred to his accounts over the seven months of the scheme. Kvashuk used the proceeds to buy a $1.6m home and a $160,000 Tesla car.
“In addition to stealing from Microsoft, Volodymyr Kvashuk also stole from the government by concealing his fraudulent income and filing false tax returns,” said IRS-CI special agent in charge, Ryan Korner.
“Kvashuk’s grand scheme was thwarted by the hard work of IRS-CI’s Cyber Crimes Unit. Criminals who think they can avoid detection by using cryptocurrency and laundering through mixers are put on notice…you will be caught and you will be held accountable.”
A notorious group behind digital skimming attacks has upped its game recently, infecting at least 40 new websites, according to researchers.
Magecart Group 12, one of many collectives using techniques designed to harvest card details from e-commerce websites, continues to adapt its modus operandi, according to researcher Max Kersten.
The current campaign has been running for several months, with the first hacked site linking to a skimmer domain on September 30 2019 and the most recent infection date being February 19 2020, he explained.
“The skimmer, hosted on jquerycdn.su, changed four times during the campaign. In the four versions of the skimmer that were used in this campaign, the used obfuscation method is the same as in the other reported campaigns,” he continued.
“The first stage loads the actual skimmer script, which is polluted with garbage code. The skimmer itself is different, compared to the first versions. The skimmer grabs all fields from the page, rather than all forms. Although the approach and script are different, the general concept remains the same: obtaining credit card credentials.”
Of the 39 new sites hit by the group, 13 were still compromised at the time of writing, despite being contacted by Kersten. Most appear to be SME-sized retailers who perhaps don’t have many resources to devote to cybersecurity. Consumers are urged not to shop on these sites.
Last month, Kersten and fellow researcher Jacob Pimental revealed how Magecart 12 was targeting ticket re-selling websites for the 2020 Olympics and UEFA Euro 2020 tournaments. Although the domain was taken down, the group simply swapped it for another and continued, highlighting the resilience of the threat, according to RiskIQ.
Tarik Saleh, senior security engineer at DomainTools, urged companies to ensure their underlying operating systems and web frameworks are patched and up-to-date to prevent common exploits running.
“Secondly, it’s important to adjust your web application’s Content Security Policy (CSP) to allow scripts running on it to be from your specific whitelisted domains,” he added.
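A script-source whitelist of the kind Saleh describes can be sketched as follows; the domain names are placeholders, not real deployment values. The resulting string is sent as the `Content-Security-Policy` response header, so a skimmer injected from an attacker-controlled domain such as jquerycdn.su would be blocked by the browser.

```python
# Placeholder whitelist: the site itself plus one trusted CDN.
ALLOWED_SCRIPT_SOURCES = ["'self'", "https://cdn.example-shop.com"]

def build_csp_header() -> str:
    """Assemble a Content-Security-Policy header value from directives."""
    directives = {
        "default-src": ["'self'"],
        "script-src": ALLOWED_SCRIPT_SOURCES,
    }
    # CSP directives are separated by semicolons; sources by spaces.
    return "; ".join(
        f"{name} {' '.join(sources)}" for name, sources in directives.items()
    )

print(build_csp_header())
```

With this policy in place, a `<script src="https://jquerycdn.su/...">` tag injected into the page simply fails to load, because its origin is not in the `script-src` list.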
“Thirdly, I recommend deploying a File Integrity Monitoring (FIM) solution to your website’s directory containing the scripts used for the checkout or payment handling process. FIM solutions are great for monitoring when files have been tampered with or added to your website, and in this case it won’t prevent you from being compromised, but it will let you know if Magecart has been installed.”
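The core of a FIM tool of the kind Saleh recommends is small: hash every file in a watched directory, then compare a later snapshot against the baseline. This is a minimal sketch, not any particular commercial product.

```python
import hashlib
from pathlib import Path

def snapshot(directory: str) -> dict:
    """Record a SHA-256 hash for every file under the directory."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(directory).rglob("*"))
        if p.is_file()
    }

def diff(baseline: dict, current: dict) -> dict:
    """Report files added, removed, or modified since the baseline."""
    return {
        "added": sorted(set(current) - set(baseline)),
        "removed": sorted(set(baseline) - set(current)),
        "modified": sorted(
            path for path in set(baseline) & set(current)
            if baseline[path] != current[path]
        ),
    }
```

Run `snapshot()` against the checkout-script directory after a known-good deploy, store the result, and alert whenever a later `diff()` is non-empty; a Magecart injection shows up as a modified or added script file.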
It’s believed that Magecart groups had infected over two million websites, as of October 2019.
A controversial facial recognition company has just informed its customers of a data breach in which its entire client list was stolen.
Clearview AI leapt to fame in January when a New York Times report claimed that the start-up had scraped up to three billion images from social media sites to add to its database.
That makes it a useful resource for its law enforcement clients, which can query images they capture against the trove. The FBI’s own database is said to contain little more than 600 million images.
Now those clients have been exposed after an unauthorized intruder managed to access Clearview AI’s entire customer list, the number of user accounts those companies have set up, and the number of searches they’ve carried out. However, the intruder apparently didn’t get hold of client search histories.
Interestingly, the firm claimed that its own servers, systems and network weren’t compromised.
In a statement sent to The Daily Beast, company attorney, Tor Ekeland, claimed that security is the firm’s top priority.
“Unfortunately, data breaches are part of life in the 21st century. Our servers were never accessed. We patched the flaw, and continue to work to strengthen our security,” he added.
Clearview AI is coming under increasing pressure from privacy activists and social media companies.
The latter have reportedly demanded the firm “cease and desist” from its web scraping activity as it breaches their terms of service, although the firm claims it is a First Amendment right to collect publicly available photos.
The firm has also been forced to deny rumors that consumers could also use its service to find out personal information including address details of people whose images they possess.
Tim Mackey, principal security strategist within the Synopsys CyRC (Cybersecurity Research Center), argued that cyber-criminals will now view compromise of Clearview AI’s systems as a priority.
“I would encourage Clearview AI to provide a detailed report covering the timeline and nature of the attack. While it may well be that the attack method is patched, it also is equally likely that the attack pattern is not unique and can point to a class of attack others should be protecting against,” he added.
“Clearview AI represents a target for cyber-criminals on many levels, and as is often the case, digital privacy laws lag technology innovation. This attack now presents an opportunity for Clearview AI to become a leader in digital privacy as it pursues its business model based on facial recognition technologies.”
Now is the time to review your exposure to GDPR and CCPA-related lawsuits, and review contracts related to penetration testing.
In a talk at the RSA Conference in San Francisco exploring recent cyber-related court cases, Julia Bowen, senior vice-president, general counsel and corporate secretary at The MITRE Corp, and Professor Rick Aldrich, cybersecurity policy and compliance analyst at Booz Allen Hamilton, reviewed a number of issues relating to border control, surveillance, and online page removals.
“If you are under the GDPR or the CCPA, make sure you’re doing that correctly,” Aldrich said, referencing cases where page takedowns were disputed by search engines over local laws.
He also recommended checking if you are collecting biometric data, and the legality of doing that, referencing a recent case where the Illinois Supreme Court dismissed a case that would have pared back a state law limiting the use of facial recognition and other biometrics. “If you are doing worldwide business that involves people in Illinois, you may want to check that,” Aldrich advised.
He also recommended reviewing your penetration testing contracts, considering the recent case of the Coalfire employees who were arrested while on an exercise in Iowa.
In the coming months, Aldrich recommended updating your organization’s policies to minimize risk with regard to personal information, cloud providers, and cross-border data transportation. Aldrich and Bowen listed a number of issues related to these cases, including cases where personal devices are seized and their owners ordered to unlock them.
“If you travel internationally, you may be asked to surrender equipment and risk giving up information to the government,” he said. “If they seize equipment, you may not have it anymore.”
Finally, Aldrich recommended updating your organization’s policies to minimize risk with regard to insurance providers, especially where payouts were not made because an incident was determined to be an act of war. “Some people are now saying that they don’t have an exclusion for an act of war, so be very careful to check that they will pay out,” he said. “There are a lot of companies that are not expecting to pay out $50m when NotPetya occurs.”
It’s time to get rid of parental controls and let younger people make their own decisions.
Speaking in the opening keynotes at the RSA Conference in San Francisco, Wendy Nather, head of advisory CISOs, Duo Security at Cisco, said that parental controls need to be disabled as “we need to teach them to make good security choices for themselves because they need to learn this from a young age.”
As part of her keynote, Nather said that she does not use parental controls at home, but her teenage daughter asked for them to be turned on “to help enforce her study time,” so they were set up for her study time, and Wendy controls the password.
“We have to teach them to make good security decisions, as we keep making the same mistakes year after year,” she said, saying this was done with web servers, mobile, and IoT, and this is because of the demographic. “We have to teach everybody, so it doesn’t matter who comes in with new technology, they know how to apply the security controls.”
She concluded by saying that it has to be about “security of, by, and for the people as we’re the ones who have been working on this for decades.”
Speaking at the RSA Conference in San Francisco on how to build a comprehensive Internet of Things (IoT) security testing methodology, Rapid7 IoT research lead Deral Heiland said that it is currently hard to determine what IoT is, so he built a testing model to determine the traits of IoT so they can be better detected and secured.
He said that he often asks companies whether they have any IoT technology, so he created a methodology to define the traits of IoT, based on four key areas:
- Management control—to control and manipulate data
- Cloud service APIs and storage
- Capability to be moved to the cloud
- Embedded technology
He said that knowing the traits of IoT gives you the ability to better defend your ecosystem, and lets you build a methodology to build and test IoT.
The first stage is a functional evaluation: finding information and gathering knowledge, as “there is no way to test your IoT ecosystem if you don’t know how it works.”
Heiland said that once you have done a functional evaluation, you can do a larger reconnaissance to look at what is going on, use open source intelligence to see what frequency the communication is running at and what components it was running, and if they had any notable vulnerabilities and exploits in the past.
The next stage is testing, including web-based penetration tests, scans, and more manual tests of the build, including looking at physical ports. For the firmware, Heiland recommended analysis to look for hardcoded keys, passwords, undocumented command structures, IP addresses, and hardcoded URLs of interest. He also recommended radio frequency (RF) testing, as most IoT devices “have some form of this”; this can determine whether communications are encrypted and effective, and identify the RF protocols in use. He also recommended looking at pairing and over-the-air updates.
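A first pass at the firmware-analysis step Heiland describes can be as simple as pattern-scanning the raw image for hardcoded secrets. This is a minimal sketch with illustrative patterns and a made-up sample blob; a real assessment would pair it with tools like binwalk and manual review.

```python
import re

# Illustrative patterns for secrets commonly hardcoded in firmware images.
PATTERNS = {
    "url": re.compile(rb"https?://[\w.\-/]+"),
    "private_key": re.compile(rb"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
    # Printable-ASCII value so the match stops at binary padding bytes
    "password_assignment": re.compile(rb"(?i)password\s*=\s*[\x21-\x7e]+"),
}

def scan_firmware(blob: bytes) -> dict:
    """Return every match for each pattern found in the firmware blob."""
    return {
        name: [m.decode("ascii", "replace") for m in pattern.findall(blob)]
        for name, pattern in PATTERNS.items()
    }

# Hypothetical firmware fragment with a hardcoded password and update URL.
sample = b"\x00\x01password=hunter2\x00https://update.vendor-example.com/fw\x00"
print(scan_firmware(sample))
```

Any hit is a lead for the tester: a hardcoded update URL points at infrastructure to probe, and a plaintext credential is a finding in itself.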
Heiland admitted that one test does not work for all IoT, and elements will need to be changed for different products, as “you find new things every time and new ways of doing things.”
In one case study, he presented an analysis of a smart door lock designed to provide short-term access via email. He set up a man-in-the-middle (MitM) attack using Burp Suite to create a certificate, “as the mobile app didn’t have SSL pinning, so it was simple to create a certificate and gain man-in-the-middle access and see communications flowing back and forth.”
He said that he was able to see the communications, including how the API returned control keys for all users, which were written to the developer debug log and available via a file on the phone. “We didn’t need to root the device as all of the data was in there; this had a session token, so in theory you could control the lock forever.”
He explained that this issue has now been patched, and he declined to reveal the vendor name.
In terms of who can do this sort of testing, he said he would expect a person to be a “seasoned tester at a bare minimum” as well as have hardware skills, budget for kit, and “an endless desire to learn.”
Heiland said there are three elements needed to get to a better stage of IoT security. The first is for manufacturers to implement a product security testing program: test products before they go to market, and bring products already on the market back in-house for testing.
The second is for enterprise consumers to ask questions of their vendors, inventory their IoT, define what IoT means to their organization, and assign ownership.
The third is for IoT researchers and testers to follow Heiland’s methodology and improve their own skill sets.
In a talk at the RSA Conference in San Francisco, students and researchers from University of California, Berkeley presented a theoretical method on how voters could be influenced using technical and automated methods.
Talking about “How AI Inference Threats Might Influence the Outcome of 2020 Election,” the three presented their own research, which included aggregating data to show how misinformation can be spread. Karel Baloun, software architect and entrepreneur at UC Berkeley, said these types of attacks can be nefarious as “attacks on democracy” are often not seen and it can be denied that they took place.
Pointing at the 2016 US presidential election, Baloun said that the hacking of the Democrats’ emails by Russia and the passing of them to WikiLeaks “set the narrative for the election” and there is proof that this effort was able to “suppress over 100,000 votes.” He cited four further examples of political processes that have been influenced:
- The 2016 Ukraine Election
- The 2016 UK Brexit vote on EU Membership
- The 2019 Hong Kong Anti-Extradition Law Protests
- The 2020 Taiwan Presidential Election
Ken Chang, cybersecurity researcher at University of California, Berkeley, said that when someone registers to vote, that information should be trusted to be held securely, as all information that is collected is “a critical piece of information.”
With voter registration data, Chang said that the potential of a data breach is obvious, so the conversation needs to be centered on how to protect information, and not on how a data broker can collect and distribute information without the person knowing.
Baloun said that in the experiment the team was able to build voter databases and combine them with social media data, advertising, and messaging to influence people. Citing the case of Cambridge Analytica, Baloun said the firm was able to use openly accessible Facebook data, along with personal information that is freely obtainable, in the form of credit scores and credit card data.
Baloun said the “technology is well advanced”: machine learning is already being applied to big data sets, AI can generate texts and emails and write news, and it is only a matter of time before AI can carry out the whole process.
“If you suck the firehose you only get what you’re provided,” Baloun said, pointing out that it could be easy for an attacker to impersonate an influential friend or family member.
Looking at steps to take, Baloun encouraged people to push back when friends and family share such information, and to think about what they consume. He also called for the Secretary of State with responsibility for voter records to mandate a disclosure requirement, and for the FCC to ban the creation of “personal profiles” pretending to be voters.
“Each one can make a big difference, as the system depends on easily available rich voter profiles, and targeting with messaging,” he said. “To protect democracy we need to make things more expensive and less effective and let humans intervene, as they don’t know it is happening.”
How can the US deter other nations from executing cyber-attacks? According to a panel of US government officials speaking at the RSA Conference in San Francisco, there is a range of legal, diplomatic, and even military options that can be considered.
Adam Hickey, Deputy Assistant Attorney General, National Security Division at the US Department of Justice (DOJ), commented that there is a lot that can be done to deter nation-states from conducting cyber-attacks.
"Law enforcement is one tool of federal power and should be used to deter threat actors," Hickey said.
Hickey noted that he knows in many cases even if a state threat actor is charged in a legal indictment, an arrest won't be made. That's why the DOJ is using other legal instruments that can disrupt operations, including court orders to seize infrastructure.
That infrastructure, however, can be anywhere in the world, a challenge that Steven Kelly, Chief of Cyber Policy, Cyber Division for the Federal Bureau of Investigation (FBI), brought up. Kelly noted that the distributed nature of cyber-attack infrastructure often makes attribution difficult.
"Some people might scoff at the idea that we can deter nation-state cyber-attack activity, because the attacks keep happening, but we're working on it," Kelly said.
Kelly added that multiple agencies have been working together to get faster at identifying who is behind an attack and then working together to impose consequences more rapidly. He emphasized that it takes a lot of cooperation within the US government and with other law enforcement groups around the world to get all the facts that enable the FBI to identify threat actors behind an attack.
"Nations and the individuals that are working on their behalf can no longer assume that they can operate with anonymity," Kelly said.
Secret Information and Public Indictments
Among the assets that the US government has engaged to help deter nation-state cyber-attacks is the intelligence community, though much of their work still needs to remain secret, commented Thomas Wingfield, Deputy Assistant Secretary of Defense for Cyber Policy at the US Department of Defense (DOD).
Wingfield noted that while the DOD can't reveal everything about its operations it can and does help other agencies to keep the country safe.
Information from the public is also a key part in helping with deterrence. Hickey commented that in recent years, as companies have matured in their own cybersecurity process, attacked companies have disclosed information to the government that is critical to helping with attribution.
In the final analysis, Wingfield emphasized that deterrence isn't just about lawsuits or projecting power in some way with a retaliatory action. Rather, in his view deterrence is about influencing would-be attackers to make a different decision.
"At the end of the day, deterrence is meant to work in one place, and that is inside the human element, inside of the brain of the adversary decision maker," Wingfield said.
Cyberattacks can impact individuals and companies in different ways, but few if any industries have the same life-or-death impact as medical devices.
In recent years, medical devices and hospitals have come under increasing attack from different threat actors, which has not escaped the notice of regulators in the United States. At the RSA Conference in San Francisco, the safety implications of medical devices was detailed, along with direction on how things could well be set to improve in the years ahead.
"If those vulnerabilities aren't taken care of, devices can potentially be exploited, and that can result in patient harm or serve as a pivot point to get into a hospital network," Chase said.
The risk to medical infrastructure is far from a theoretical threat. In 2017, the WannaCry ransomware attack had devastating consequences in the UK, shutting down NHS operations and hospitals. There have also been publicly reported flaws in medical devices that vendors have been slow to fix. Perhaps the most well-known example occurred with Abbott Laboratories and its St Jude cardiac pacemakers.
Chase added that even when patches are available for known issues, patching medical devices is often far from routine, with many hospitals unaware that they are vulnerable.
How Medical Device Security Will Get Better
The US Food and Drug Administration (FDA), together with MITRE and other stakeholders, has been engaged in multiple efforts to improve the state of medical device security. Chase noted that in 2018 the Medical Device Safety Action Plan was published by the FDA, which includes a number of action items for device manufacturers. Among the primary items is a requirement that firms build capabilities to update and patch device security into a product's design. The plan also requires that device manufacturers have coordinated disclosure policies in place in the event of a vulnerability.
Margie Zuk, Senior Principal Cybersecurity Engineer at MITRE, commented that a key challenge with medical device cybersecurity is making sure that the vulnerabilities are understood with the right amount of detail. To that end, MITRE has been developing a Medical Device Rubric for Common Vulnerability Scoring System (CVSS) that has been submitted to the FDA.
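The rubric itself was not detailed in the talk, but it builds on the published CVSS v3.1 base-score formula. A minimal sketch of that standard computation for an unchanged-scope vulnerability (any medical-device-specific adjustments in MITRE's rubric are not reproduced here):

```python
import math

# CVSS v3.1 base metric weights (unchanged scope).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required
UI = {"N": 0.85, "R": 0.62}                        # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # Confidentiality/Integrity/Availability

def roundup(x: float) -> float:
    """CVSS v3.1 'round up to one decimal' with floating-point care."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10

def base_score(av, ac, pr, ui, c, i, a) -> float:
    """Base score for a scope-unchanged CVSS v3.1 vector."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))
```

For example, the vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H scores 9.8 (Critical) under this formula; a rubric like MITRE's aims to make the metric choices behind such a vector consistent across clinical contexts.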
Another current effort is to help hospitals build out their preparedness for cybersecurity incidents like WannaCry. Zuk noted that with WannaCry, for example, there was a lot of confusion between hospitals and manufacturers about risk. To help with that type of situation in the future, MITRE has developed a playbook to help hospitals with incident response.
A key challenge for understanding the risk is related to testing under different scenarios. That's where Zuk said that the Medical Device Cybersecurity Sandbox effort comes into play as an effort to help validate vulnerabilities in clinical scenarios.
Software Bill of Materials (SBOM) Will Help
One of the key efforts under way in 2020 is a multi-stakeholder effort led by NTIA for a Software Bill of Materials (SBOM). With SBOM, software in medical and other devices would need to have a list of constituent components that are included.
"SBOM is really critical to understand if you have a vulnerability in your system," Zuk said. "Hospitals need to know what the attack surface is and what's at risk."
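As a sketch of why that component list matters, checking an SBOM against an advisory feed reduces to a simple lookup. Everything below (component names, versions, advisory IDs) is invented for illustration:

```python
# Hypothetical SBOM for a device, flattened to name -> version.
sbom = {
    "busybox": "1.27.2",
    "openssl": "1.0.2k",
    "sqlite": "3.31.0",
}

# Hypothetical advisory feed: (component, vulnerable version, advisory ID).
advisories = [
    ("openssl", "1.0.2k", "EXAMPLE-2020-0001"),
    ("zlib", "1.2.8", "EXAMPLE-2020-0002"),
]

def affected(sbom: dict, advisories: list) -> list:
    """Return advisory IDs whose (name, version) appear in the SBOM."""
    return [
        adv_id
        for name, version, adv_id in advisories
        if sbom.get(name) == version
    ]
```

Without the SBOM, a hospital cannot even run this query; with it, exposure to an announced vulnerability becomes a lookup rather than a vendor-by-vendor investigation.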
Fundamentally, the key to improving medical device cybersecurity is reducing risk and understanding the potential for exploitation.
"It's a shift in thinking about how a device is supposed to be used, to how a device can be exploited by a malicious adversary that is trying to abuse the device," Chase concluded.
Australian Federal Police (AFP) could be given powers to cyber-spy and hack into online computer systems used by criminals based in Australia under a new proposal being considered by the country's federal government.
Suggested changes would allow the AFP to call for assistance from the Australian Signals Directorate (ASD) or extend the cyber-capabilities of the AFP.
Currently the ASD only has the power to hack, disrupt, and destroy foreign cybercriminal activity, as the agency is banned from spying or hacking into online systems based within Australia.
This situation means that agents who come across cybercriminal activity linked to a server based in Australia must immediately stop investigating it, no matter how serious the offense being committed.
Supporters of the proposed changes say they could help the ASD hunt down sexual predators and pedophiles who use servers in Australia for their cybercriminal activity.
"At the moment, if there is a server in Sydney that has images of a five- or six-month-old child being sexually exploited and tortured, then that may not be discoverable, particularly if it's encrypted and protected to a point where the AFP or the ACIC (Australian Criminal Intelligence Commission) can't gain access to that server," Home Affairs Minister Peter Dutton told the Australian Broadcasting Corporation.
"It can be a different picture if that server is offshore, so there is an anomaly that exists at the moment."
Reports of online child exploitation in Australia have increased massively in the past decade. Last year, the AFP received 17,000 referrals for online child exploitation material, compared to just 300 received in 2010.
A single referral can cover any amount of material, ranging from one image of a child being abused to up to thousands of videos and images.
Dutton said he wanted to put an end to cybercriminals operating in Australia with impunity.
"We are seeing the rape and torture of our children, all for sexual gratification," said Dutton. "I want to make sure that if they [the police] can get a warrant from a court and go to a pedophile's house and search that house for material . . . I want to make sure we have the same power to do that in the online life of that pedophile."
The US Department of Defense announced yesterday that it has adopted a series of ethical principles regarding the use of artificial intelligence (AI).
Designed to build on the US military’s existing ethics framework, which is based on the US Constitution, Title 10 of the US Code, Law of War, existing international treaties, and longstanding norms and values, the principles will apply to both combat and non-combat functions.
Embracing high-level ethical goals, the principles state that AI should only be used by the DoD in a way that is responsible, suitable, traceable, reliable, and governable.
Under the new principles, DoD personnel will be expected to "exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities," and "take deliberate steps to minimize unintended bias in AI capabilities," according to a statement released yesterday by the DoD.
The principles are based on a set of guidelines on the ethical use of AI published in November 2019 by the Defense Innovation Board. These guidelines—the result of 15 months of consultation with leading AI experts in commercial industry, government, academia, and the American public—were first provided to Secretary of Defense Dr Mark Esper in October.
"The United States, together with our allies and partners, must accelerate the adoption of AI and lead in its national security applications to maintain our strategic position, prevail on future battlefields, and safeguard the rules-based international order," said Secretary Esper.
"AI technology will change much about the battlefield of the future, but nothing will change America's steadfast commitment to responsible and lawful behavior. The adoption of AI ethical principles will enhance the department's commitment to upholding the highest ethical standards as outlined in the DOD AI Strategy, while embracing the US military's strong history of applying rigorous testing and fielding standards for technology innovations."
The principles align with efforts by the Trump administration to advance AI technologies. Last year, President Donald Trump launched the American AI Initiative, a national strategy for leadership in artificial intelligence. The initiative aims to discover and promote innovative uses for AI while protecting civil liberties, privacy, and American values.
New research into malware affecting mobile devices has found that stalkerware and adware posed the biggest threat to users in 2019.
The annual "Mobile Malware Evolution" report, published yesterday by Kaspersky, shows a significant increase in the number of attacks on the personal data of mobile device users. From 40,386 unique users experiencing attacks in 2018, the figure rose to 67,500 in 2019.
Mobile advertising Trojans were a major threat, with the number of detected installation packages that use this type of malware nearly doubling over the course of the year from 440,098 to 764,265. However, researchers found that the rise in attacks was not caused by classic spyware or Trojans, but by a massive spike in the amount of “so-called stalkerware.”
Often promoted as parental surveillance tools, stalkerware apps are installed without the device owner’s consent to secretly stream the victim’s personal information. Devices kitted out with this eavesdropping app will send images, videos, correspondence, and geolocation data from the victim’s device to a command server.
Researchers observed a drop in the number of mobile malicious installation packages detected for a fourth year running. From their peak of 8,526,221 in 2016, the number of mobile threats decreased to 3,503,952 in 2019, which is only 542,225 more than the number of threats detected in 2015.
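The report's relative claims can be sanity-checked against the absolute figures it quotes. A quick sketch recovering the implied numbers:

```python
# Figures quoted from Kaspersky's "Mobile Malware Evolution" report.
packages_2016, packages_2019 = 8_526_221, 3_503_952
gap_vs_2015 = 542_225

# The 2015 figure implied by "only 542,225 more than ... 2015".
packages_2015 = packages_2019 - gap_vs_2015

# "Nearly doubling": growth factor for advertising-Trojan packages.
adware_growth = 764_265 / 440_098
```

The implied 2015 baseline is about 2.96 million packages, and the advertising-Trojan growth factor works out to roughly 1.74, consistent with the report's "nearly doubling" description.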
For the third consecutive year, mobile malware attacks were most prevalent in Iran, where 60.64% of users were affected. The countries with the second and third highest percentages of impacted users were Pakistan and Bangladesh, where 44.43% and 43.17% of users were affected, respectively.
While the number of mobile ransomware Trojans detected rose by 8,186 to 68,362 year on year, one threat that was on the decline was mobile banking Trojans.
"In 2019, we detected 69,777 installation packages for mobile banking Trojans, which is half last year’s figure," wrote researchers.
However, the banking Trojans that were detected were worryingly advanced.
Researchers wrote: "The year 2019 saw the appearance of several highly sophisticated mobile banking threats, in particular, malware that can interfere with the normal operation of banking apps. The danger they pose cannot be overstated, because they cause direct losses to the victim. It is highly likely that this trend will continue into 2020, and we will see more such high-tech banking Trojans."
Nation states are actively attacking digital and internet-connected assets, but whether or not the US and other governments are doing enough to stop those attacks is a burning question that was debated in a session at the RSA Conference in San Francisco.
Sometimes there is a tendency for individuals or even organizations to question whether nation state cybersecurity attacks matter, which is something that Tom Corcoran, head of cybersecurity at Farmers Insurance Group, disagreed with. In his view, whether we like it or not, cyber space attacks matter to everyone now. To reinforce his point, he cited a famous quote attributed to Russian revolutionary Leon Trotsky at the turn of the twentieth century: “You may not be interested in war, but war is interested in you.”
What Nation States Want
The reasons why different nations engage in cybersecurity attacks are wide and varied though Stewart Baker, partner at Steptoe & Johnson LLP, summarized the key threat actors succinctly.
“The Chinese just want to steal everything, Iran is out for revenge and the Russians just want to screw us up,” he said.
Ambassador Timo Koster, ambassador-at-large, Ministry of Foreign Affairs of the Kingdom of the Netherlands, had a somewhat more nuanced view on why different countries engage in cybersecurity attacks. In Koster’s view, there is a link between the nations that attack others over the internet, and what they do to their own people.
“They are largely authoritarian regimes that have a disregard for individual and collective human rights and that is exactly what they do to other nations,” Koster argued.
Liesyl Franz, senior policy advisor, Office of the Coordinator for Cyber Issues at the US Department of State, noted that each nation state has its own motivations for attacks and that all comes into play with how the US and other governments can deter them. She also noted that there are things that the US is in fact doing to deter nation state-backed cyber-attacks.
“Over the last 18 months, we have taken progressively nimble steps to call out nation state behavior in cyber, to attribute malicious cyber-behavior, calling them out and saying why it is bad and what harm it does,” she said.
One such action occurred on February 20 when the US government publicly accused Russia of a major cyber-attack in the Republic of Georgia. Franz noted that the US government isn’t just looking to “name and shame” nation states but rather it is looking to establish a framework for responsible state behavior in the cyber-domain.
“We think that the diplomatic aspect of the public attributions we made may not work today for what happened in Georgia,” Franz admitted.
She added that the next step after public disclosure could be sanctions or legal indictments. Koster added that deterrence in cyber space is a difficult thing and there is a need to have a continuum of responses available to help influence decisions and ultimately deter nation state cyber-attacks.
With cyber-attacks, there is also a large risk of unintended consequences, which is another challenge that governments will need to consider. One primary example of that risk comes from the NotPetya attack, which has been attributed to Russia as a targeted attack against Ukraine. The NotPetya attack, however, had a much broader global economic impact.
“Cyber is like climate, it doesn’t stop at the border,” Koster concluded.