With the U.S. 2020 presidential election looming, there is a certain amount of anxiety about the state of election security systems. The federal government has not been sitting idly by, running multiple ongoing efforts, including those led by the Department of Homeland Security's (DHS's) Cybersecurity and Infrastructure Security Agency (CISA).
At the Voting Village within the DEF CON 27 conference in Las Vegas, members of CISA's National Cybersecurity Assessments and Technical Services (NCATS) outlined their mission and their challenges for election security.
"We're here to help secure our nation's election infrastructure," Jason Hill, chief of NCATS at CISA, told the audience.
Hill explained that NCATS offers its services for free to the federal government, as well as to state and local election officials. NCATS conducts cybersecurity assessments before an adversary is known to have breached a system, a point in time that he referred to as "left of boom." He added that NCATS tries to find all of the vulnerabilities it can and has several different services it offers.
One of the primary services is the Cyber Hygiene service, an external scan of an organization's perimeter. Genevieve Marquardt, IT specialist at NCATS, explained that the Cyber Hygiene program does not go inside an organization. Vulnerability scanning is conducted with multiple tools, including the open source Nmap tool to identify assets and Nessus to identify known vulnerabilities. She added that the scans are run continuously and automatically to help organizations identify potential security issues.
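The core idea of an external perimeter scan can be sketched with nothing more than TCP connect attempts, which is also the basis of Nmap's simplest "connect scan." This is a toy illustration against a throwaway local listener, not a depiction of NCATS's actual tooling:

```python
import socket

def tcp_port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds (a 'connect scan')."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan_ports(host, ports, timeout=1.0):
    """Return the subset of ports that accept connections."""
    return sorted(p for p in ports if tcp_port_open(host, p, timeout))

# Demo against a throwaway local listener so the sketch is self-contained:
listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
listener.listen(1)
open_port = listener.getsockname()[1]
found = scan_ports("127.0.0.1", [open_port])
listener.close()
```

Real scanning tools layer service and version detection (Nmap) and known-vulnerability checks (Nessus) on top of this basic reachability test.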
Another core service offered by NCATS is the Phishing Campaign Assessment, a six-week engagement. As part of the engagement, NCATS sends six different emails to a customer, ranging from the Nigerian Prince scam to a targeted spear-phishing campaign, to see what will get through. Hill commented that there is usually someone who will click on one of the messages, so it's an effective exercise.
Another service offered by NCATS is the Risk and Vulnerability Assessment, a two-week penetration test.
"We have a remote penetration test where all we do is remote assessment work, including web app scanning, external penetration testing and a basic phishing campaign assessment," Hill said.
The other core program offered by NCATS is called the Critical Product Evaluation (CPE), in which equipment is tested and validated. Hill said that CISA is partnered with multiple labs where "the equipment can be sent to let some really smart people tear it down to look for software, firmware and hardware vulnerabilities."
NCATS is getting busier as the 2020 election cycle nears. Marquardt said that NCATS currently has about 1,300 customers. Of those, she noted that about 200 are election organizations, but many more are starting to sign up with the elections coming up. NCATS has conducted five full phishing campaign assessments so far this year, with three more in progress. For remote penetration testing, NCATS has completed 25 engagements, with 20 more currently in progress.
Hill commented that NCATS is limited by its resources, but it can scale up through the use of third-party contractors as well.
"What we've done is we've offered to those counties and states that are asking for our services...a cyber-hygiene program. And right now we have roughly 1,300 customers in our cyber-hygiene program and we can scale that up to about 6,000," Hill said. "There are roughly 3,007 counties in the United States, so if all of them wanted to sign up, they could."
Hill added, however, that NCATS services are voluntary and counties need to make a request in order to get them. While there are concerns and challenges that face counties and elections infrastructure, Hill cautioned that the overall situation isn't terrible.
"There are some good places, it's not all dire, that's not the picture I want to paint, because it's not that bad," Hill said. "There's really no difference between an election system and a normal network system that we test: we find the exact same vulnerabilities in all of the networks that we test."
There are a lot of different risks to personal privacy, but one of the biggest could well be users themselves.
In a session at the Crypto and Privacy Village within the DEF CON 27 conference in Las Vegas, Cat Murdock, security analyst at GuidePoint Security, outlined a nightmare scenario seemingly straight out of an episode of Black Mirror (the session, coincidentally, was titled Black Mirror: You Are Your Own Privacy Nightmare – The Hidden Threat of Paying For Subscription Services).
Murdock detailed how simply having a Netflix account could potentially be the key that enables an attacker to gain access to a user’s banking information. She noted that approximately 60% of the adult population pays for some form of online subscription service, be it Netflix, Spotify or something else. She also noted that everyone with an online subscription has a bank account.
One way a financial institution verifies an account holder when they try to gain access is to verify a recent transaction, which is where subscription services come into play. Murdock observed that there are only so many plans that a subscription service offers and the payments typically recur at the same time every month.
She also noted that a lot of people will comment about their subscriptions on social media, identifying that they just paid again or have continued their subscriptions.
“People love to talk about their subscriptions,” she said. “This is quality open source intelligence [OSINT].”
To test her theory for the presentation, Murdock opened up a new bank account. During the presentation, she played audio recordings of her interactions with the bank, using OSINT and social engineering skills to gain access, which she ultimately was able to achieve.
“It's not your bank’s fault that you use Netflix and it’s not Netflix’s fault that you charge it to the bank,” she said. “It's incumbent on us as users to pay attention to these things, to understand that they're happening.
“Remember that any service provider you use is only responsible for their own privacy terms, and, quite frankly, as we have seen, they don’t always do that well either,” she added.
As a result, Murdock suggested that it is ultimately up to each individual to take care of their privacy themselves. She recommended that individuals be very aware of what they’re choosing to share with the world and who can see it.
“Make sure that you’re owning your own privacy and you know, try and do routine hygiene checks,” she said. “Pick a day every quarter or every month and ask: What am I signed up for? What is new? What am I going to share or did somebody else share something about me?”
Security experts have uncovered major new vulnerabilities in a group hook-up app, exposing private pictures, real-time location and highly sensitive personal details.
Security consultancy Pen Test Partners branded the 3fun app a “privacy train wreck,” claiming the privacy issues it found could end countless careers or relationships.
The app leaked location data right down to the house and building level. Some of the exposed users’ data even put their location on Downing Street and in the White House, although the researchers hypothesized that this could simply be tech-savvy users manually re-writing their position.
“Several dating apps including Grindr have had user location disclosure issues before, through what is known as ‘trilateration.’ This is where one takes advantage of the ‘distance from me’ feature in an app and fools it. By spoofing your GPS position and looking at the distances from the user, we get an exact position,” explained Pen Test Partners’ Alex Lomas.
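The trilateration trick Lomas describes reduces to elementary geometry: three spoofed positions plus three reported distances pin a target down exactly. A minimal sketch on a flat plane (the coordinates are illustrative; real attacks work on GPS coordinates and must tolerate distance rounding):

```python
def trilaterate(p1, d1, p2, d2, p3, d3):
    """Solve for (x, y) given three anchor points and reported distances.

    Subtracting the circle equations pairwise yields a 2x2 linear system,
    solved here with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# Spoof three positions, read back three "distance from me" values,
# and the victim's location falls out exactly:
x, y = trilaterate((0, 0), 5.0, (6, 0), 5.0, (0, 8), 5.0)
```

This is why merely showing a distance, rather than raw coordinates, is not enough to protect location privacy; apps typically mitigate it by rounding or snapping distances to a coarse grid.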
“But, 3fun is different. It just ‘leaks’ your position to the mobile app. It’s a whole order of magnitude less secure.”
Although users can restrict the sending of latitude and longitude information, this is only done client-side, which means the data is still available on the server and can be queried via API, he added.
Also exposed in the privacy snafu were birth dates, private photos – even with privacy settings applied – sexual preference, gender and relationship status.
It goes without saying that such information could be a treasure trove for potential blackmailers. It recalls the furore surrounding adult infidelity site Ashley Madison, where an estimated 37 million customer records were stolen and subsequently used to extort money from victims.
Pen Test Partners contacted 3fun, which fortunately “took action fairly quickly and resolved the problem.” However, the fact that an estimated 1.5 million users may have been exposed on a platform where privacy is crucial will be of great concern.
Cupertino-based NanoSec is described as a pioneer in simplifying app workload protection, with a zero-trust offering that works across multiple computing and containerized environments irrespective of the underlying infrastructure.
The tie-up will enhance McAfee’s MVISION Cloud and MVISION Server Protection products, enabling customers to accelerate the speed of application development whilst mitigating risk, meeting compliance requirements and enhancing governance across hybrid, multi-cloud deployments, the firm said.
NanoSec capabilities set to be applied to apps and workloads in containers and Kubernetes environments include: continuous configuration compliance and vulnerability assessment, plus runtime application-level segmentation for detecting and preventing lateral movement.
“NanoSec’s technology is a natural extension for McAfee MVISION Cloud, enhancing our current CASB and CWPP products, and adding to our ‘Shift-Left’ capabilities to deliver on the DevSecOps best practice to improve governance and security,” argued Rajiv Gupta, senior vice president and general manager of the cloud security business unit, McAfee.
“NanoSec’s team brings a wealth of experience to McAfee, and together we are committed to enabling organizations to reach their full cloud potential.”
The acquisition is a timely one considering the growing popularity of containers in DevOps organizations.
Gartner said in April that by 2022, over three-quarters of global organizations will be running containerized applications in production, a major increase on today’s figure of fewer than 30%.
Yet although containers represent a great way to speed up app delivery, modernize legacy apps and create new cloud-native ones, current ecosystems are immature and security must be embedded in environments across the entire life cycle, the analyst claimed.
“Although there is growing interest and rapid adoption of containers, running them in production requires a steep learning curve due to technology immaturity and lack of operational know-how,” said Arun Chandrasekaran, distinguished VP analyst.
“I&O teams will need to ensure the security and isolation of containers in production environments while simultaneously mitigating operational concerns around availability, performance and integrity of container environments.”
South Wales Police is set to begin a trial of controversial facial recognition technology this month, even as rights groups challenge its legality in the courts.
The police force is reported to be using hardware from NEC and an in-house-developed software UI to provide it with a second set of eyes to scan crowds of people and identify those who may be on a watch list.
The app-based automatic facial recognition (AFR) system measures the distance between individuals’ facial features to match those on the list with people in a crowd.
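The matching step can be pictured as a nearest-neighbor search over feature vectors with a distance threshold. The vectors and threshold below are made up for illustration; real AFR systems use high-dimensional learned embeddings, and the threshold choice directly drives the false-positive rate discussed below:

```python
import math

def euclidean(a, b):
    """Straight-line distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_against_watchlist(probe, watchlist, threshold=0.6):
    """Return the closest watch-list entry if it falls under the threshold."""
    best_name, best_dist = None, float("inf")
    for name, features in watchlist.items():
        d = euclidean(probe, features)
        if d < best_dist:
            best_name, best_dist = name, d
    return (best_name, best_dist) if best_dist < threshold else (None, best_dist)

# Hypothetical low-dimensional feature vectors for two watch-list entries:
watchlist = {"suspect_a": [0.1, 0.9, 0.4], "suspect_b": [0.7, 0.2, 0.5]}
probe = [0.12, 0.88, 0.41]  # features measured from a face in the crowd
name, dist = match_against_watchlist(probe, watchlist)
```

A loose threshold flags many innocent faces (false positives); a strict one misses genuine matches, which is the trade-off at the heart of the criticism that follows.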
However, it has been heavily criticized: a report from Big Brother Watch last year claimed that false positives in a trial by the Metropolitan Police reached 98%, while South Wales Police stored images of 2,400 innocent people incorrectly matched by AFR for a year without their knowledge.
A Cardiff man was given the green light in July to launch a High Court challenge to the police force’s use of AFR, claiming it violates the privacy rights of everyone within range of the cameras, discourages peaceful protest and discriminates against women and BAME people.
Rights group Liberty, which is representing the man in court, claimed that the new three-month trial by South Wales Police, set to start this month, was “shameful” considering the ongoing legal challenge.
Jason Tooley, board member of techUK and chief revenue officer at Veridium, argued that police need to be more strategic in their use of biometrics, combining multiple approaches.
“This strategy would take advantage of other biometric techniques such as digital fingerprinting, which ensures a higher level of public consent due to the maturity of fingerprints as an identity verification technique,” he added.
“It’s clear that alleviating privacy concerns needs to be prioritized by the police within the overall strategy for using technology in this area. The public need to be able to see the value of the technology innovation through results in order to advance consent and acceptance by citizens.”
“No policy makers understand technology,” declared Schneier. “Technologists are in one world, and policy makers are in a different world. It’s no longer acceptable for them to be in separate worlds though as technology and policy are deeply intertwined.”
But technologists and policy makers don’t understand each other, said Schneier. “They speak different languages, they make different assumptions and they approach problem solving differently.
“Policy security has been pushed to the side. [There is] no regard for what has been built and the effect it will have.
“As internet security becomes everything security, the technology we make becomes important to overall policy. We can’t get policy right if policy makers get the technology wrong.”
To fix this, suggested Schneier, policy makers need to understand technology. “It seems impossible but it’s vital.” All policy decisions need to be made with technology in mind, he said, and policy makers need technologists on their staff. “All the major policy debates of this century will have strong cybersecurity influences,” he predicted.
To get more technologists involved in policy, Schneier suggested the answer is to get “more public interest technologists,” though he did admit that it’s still a developing term. “A lot of people doing it came out of the Obama White House," he said.
“In the last century, the people doing public policies needed to be economists. Today, people doing public policy need to be technologists,” he insisted.
Schneier also called out supply chain security as being in desperate need of technical expertise. “It’s insurmountably hard. You can’t trust anyone but have no choice but to trust everyone.
“Our industry is deeply international and any policy issues can’t just make snap decisions to ban certain technologies.” Elections too, he considered, “could use a lot of public interest technology and technologist input.”
Governments and corporations need to work together to form these jobs, said Schneier, adding that “society needs to understand that what is in the best interest of corporations isn’t necessarily in the best interest of society.” Further, he added, “technology is not politically neutral.”
Reflecting on the world we’ve built, Schneier considered “we’ve built a world where programmers have the inherent power to build technology as they see fit. That privilege needs to end. The next big disruption on the internet will not be about people, but about things. Things talking to each other and getting rid of the need for human interaction.”
As technologists, he said, we have a lot of power. “As consumers however, we don’t. As employees we have an extraordinary amount of power and we need to use that power inside companies to make change happen fast.”
The government has largely been abdicating its work in this space, Schneier said, “but when IoT starts killing people, they will have to take notice.”
A movement needs to be created in the industry to better deal with the issue of fear, uncertainty and doubt (FUD).
Speaking at the Diana Initiative conference in Las Vegas, security engineer Olivia Stella explained that the term “FUD” was coined in the 1970s and used as a tactic to retain a potentially lost customer, “as it instilled fear into everyone.”
Stella said that FUD “is like calling fire in a crowded building: we just want the truth out and we talk on a daily basis about wanting transparency and truth and not FUD to confuse people.”
Looking at 50 years of technology, Stella said that in the 1970s there was “little to no technology and now it is everywhere and it is all connected to the internet.” Now kids have access to technology and “are born with technology in their hands.” However, there is a danger, she said, of “security fatigue,” where we are told of the constant problems in technology. “Add in the 24/7 news cycle,” she said, and it can be very overwhelming.
“How do we fight? Not with more technology but with education,” she said. “This needs to start for kids as it is the new sexual education.” She praised the partnership between the Girl Scouts of America and Palo Alto Networks to engage people and help their family and friends learn by proxy.
Stella said that there was need for better communication internally, with hard facts distributed “and to be an advocate to get the true data out there.” Also large companies need to do communications that are correct and timely, and need to train people outside of the security department on when and how to release info to public or internally.
She concluded by saying that the fight against FUD will be done when there is security education in place, “an area of passion to start....We need to have advocates, and I like to practice what I preach.”
Asked by Infosecurity if she would like to see more companies join her fight, she agreed, saying, “If they are saying that their product offers a service and it doesn’t, that contributes to FUD.” She also encouraged those with the ability to communicate via social media to do so and to ask the right questions.
The Dutch Tax and Customs Administration had a problem: its domain names were being abused in phishing campaigns, and it had to figure out a way to fix the issue. As it turns out, the solution is all about implementing standards that already exist to help minimize risk and improve overall email hygiene.
At a session at Black Hat USA in Las Vegas titled 'How to Detect That Your Domains Are Being Abused for Phishing Attacks Using DNS,' Karl Lovink, technical lead for the Dutch Tax and Customs Administration, and consultant Arnold Holzel outlined the standards and techniques they used to combat phishing.
"Our main objective was trying to find phishing campaigns as quickly as possible," Lovink said.
There is no shortage of technologies that can be used to combat phishing, but the key for Lovink was to take a path that didn't impact business operations and, more importantly, was based on existing standards.
Among the multiple standards that can help to improve overall email security is STARTTLS, a specification used to upgrade an insecure email server connection that isn't using TLS (Transport Layer Security) to one that is. The risk of not using TLS is that connections are not encrypted and data is sent in the clear.
STARTTLS, however, isn't the only way to get a TLS connection for email servers. There is also a specification known as DNS-Based Authentication of Named Entities (DANE), which enables the Domain Name System (DNS) to supply information about TLS support for a given domain through a resource record.
Another key standard outlined by Lovink is Mail Transfer Agent Strict Transport Security (MTA-STS). He explained that MTA-STS allows a receiving domain to publish their TLS policies to help ensure secure connections.
Looking beyond standards that secure email delivery with TLS, there is a series of standards for helping to enforce the integrity and authenticity of incoming and outgoing email. Lovink explained that the Sender Policy Framework (SPF) validates whether an email is sent from a valid IP address or domain by checking against an SPF record stored in the domain's DNS records.
For outgoing email, there is the DomainKeys Identified Mail (DKIM) standard that digitally signs outgoing mail to prove that it came from the right domain. Lovink said that the digital key for DKIM is also stored as a DNS record.
Tying SPF and DKIM together with an additional layer of reporting is the Domain-based Message Authentication, Reporting and Conformance (DMARC) specification. Lovink commented that DMARC provides direction and visibility into how to deal with the results of SPF and DKIM checks.
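All three mechanisms ultimately boil down to policies published as DNS TXT records. A sketch of what reading those policies looks like, using hypothetical records for an example.com domain (real deployments fetch the records from DNS rather than hard-coding them):

```python
def parse_tag_value(txt):
    """Split a 'tag=value; tag=value' TXT record (DMARC-style) into a dict."""
    parts = (p.strip() for p in txt.rstrip(";").split(";"))
    return dict(p.split("=", 1) for p in parts if "=" in p)

# Hypothetical records for example.com; real ones live at the domain's
# TXT record (SPF) and at _dmarc.<domain> (DMARC).
spf = "v=spf1 ip4:192.0.2.0/24 include:_spf.example.com -all"
dmarc = parse_tag_value("v=DMARC1; p=reject; rua=mailto:dmarc@example.com; pct=100")

# SPF is a list of space-separated mechanisms rather than tag=value pairs;
# a trailing "-all" tells receivers to hard-fail mail from unlisted senders.
mechanisms = spf.split()
hard_fail = mechanisms[-1] == "-all"
```

Here `p=reject` is the DMARC policy (quarantine/reject/none) applied to mail failing SPF and DKIM, and `rua` is where receivers send the aggregate reports that gave the speakers their visibility into phishing campaigns.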
Both Lovink and Holzel commented that overall there are some configuration complexities in some cases with each of the standards, but it's important for organizations to implement them to improve email security.
"You really have to implement standards if you want to prevent phishing attacks," Lovink said. "We are convinced that if everyone implemented these standards, there will be a lot less phishing in the world."
Speaking at the Diana Initiative conference in Las Vegas on “Working Remote Can Be Overwhelming and Lonely, Let's Change That,” Suzanne Pereira acknowledged that working from home can be attractive, as you “don’t get dressed and can do errands all day and go to lunch with friends,” and you will be told “you’re the luckiest person ever.” However, Pereira, whose 12 years in infosec include 10 years of working remotely, said that the reality is you can be pulled in many different directions at once, which can lead to burnout.
She said: “You can feel lonely and isolated and feel stressed." You wonder "if you’re doing something wrong as everyone tells you you’re lucky....Why do you not feel that way and why are you always stressed out?”
She recommended setting yourself guidelines of creating a working space out of the way and setting time limits for when you are working. Yet she acknowledged that this is an industry “where we like to learn and grow and research and work on something to make you better,” so when you work from 8 am to 11 pm, be clear that this is your own decision.
She recommended taking travel opportunities and joining video conferences to form better relationships. She also recommended saying no when appropriate. It can be a scary word, she said, but use it to be your own advocate to avoid taking on “something you cannot finish.”
“Also have no-calls or -messaging time, as [not having it] leads to burnout,” Pereira said. “Do a 9–5 and take no calls after that....If there is a message you will look at it tomorrow.”
The right balance can lead to “being less overwhelmed," Pereira concluded, "and it takes a lot of effort to say no, but the balance leads you to being less overwhelmed.”
Asked what a company can do to make life better for remote workers, Pereira said that companies should incorporate remote workers. “Don’t leave them on an island, as you may think you’re doing them a favor but they may hate it.”
Saying that “no one has a straight path in the career,” Kathleen Smith asked the panelists, two of whom had military experience, how they had started their careers.
Andrea Limbago, a doctor of political science who worked in academia before moving on to cybersecurity startups, said that with "all of us playing a role in preserving democracy," one of the "missions of our time" was to ensure the retention of women in their jobs, especially as cybersecurity has a growing impact on society. She noted that the other two panelists had got into cybersecurity via the Department of Defense.
Yolonda Smith, who works as a lead infosecurity analyst with Target, said her first interaction with technology was with a computer as a child, which she smashed when frustrated with a game. This led her to learn how it was put together.
She said: “IT is a capability and there are specialized training and certifications and the opportunity to deploy. There is the opportunity to ask and be curious.”
Susan Peediyakkal, a cyber-threat analyst who said she is currently on a career break and had spent 12 years in the military starting as a radar technician, was asked about education and certificates and whether to focus on experience “or letters on your résumé.” She said she had not finished her bachelor's degree but had done a course in eCornell for women in leadership and was starting with Carnegie Mellon University to do a CISO supervision course.
Said Limbago, “If you don’t keep learning and coming to conferences like this, you will be left behind.” Yolonda Smith responded that there is “an obligation as professionals in this field to seek opportunities to educate yourself,” which could be a certification or a boot camp, but it was “up to you to craft your message.”
Asked by Kathleen Smith how she could evaluate opportunities about a move into management, Yolonda Smith said it is about the understanding of “going to work and fighting to be heard and respect,” what opportunities there were for her, if there were things she could learn and if there were skills she could learn and apply that would always be of interest.
Kathleen Smith asked about prohibitive factors on job descriptions and how they could be overcome. Peediyakkal said that when she looks at job descriptions, “I don’t let them intimidate me as I go for it anyway,” even when she doesn’t have all of the listed skills, if it is a job she wants.
Concluding by giving their current mottos, Yolonda Smith said it was “never measure someone else by your yardstick.” Don't get frustrated by what others are doing and think “how come she got this?" she said. Instead, "make your next step yours.”
Peediyakkal said hers was to “be humble.” One moment can be the ultimate high, while the next you can be “super frustrated.” She added, “Never take any moment for granted.”
Limbago said that hers was to “push yourself and try something new” as what got you into a previous position may not work again. Kathleen Smith said that it was important that we be “comfortable with being uncomfortable.”
Apple’s decision to offer a $1m bug bounty has been criticized as potentially creating collusion opportunities and perverse incentives.
According to The Verge, Apple announced that it has expanded its existing bug bounty program to include macOS, tvOS, watchOS and iCloud. It will include rewards of up to $1m for a zero-click, full-chain kernel-code-execution attack.
Previously capped at $200,000, the payout will now reach $1m for iOS vulnerabilities that let attackers control a phone without any user interaction.
Another $500,000 will be given to those who can find a “network attack requiring no user interaction,” reported Forbes.
Speaking to Infosecurity, Luta Security CEO Katie Moussouris said that she was concerned about raising it to this level “as it will probably have some unintended perverse incentive consequences,” because she said that this “does nothing to compete with the offense market.”
Moussouris argued it may also produce collusion with internal employees. Thirdly, she was concerned that this “may eventually cannibalize Apple's own hiring policy and its career retention pipeline,” since quality assurance engineers who have learned enough about the architecture may feel the bounty is their only chance to earn big. “It would be a good investment for them; when else would you get a windfall like that?”
She said that “perverse incentives in the offense and defense market have to be examined very carefully because this is a price hike that is unsustainable.” While this may produce new exploits and new talent willing to work for defense, the overall impacts on the bug market are yet to be seen “and I am worried.”
The original bug bounties were $500 from 1995 to 2010, with 2010 seeing the first Google bug bounties, which started at $1,337 and which led to Mozilla raising its bug bounty to $3,000. Prices were then raised across the board.
“People thought the more, the merrier; this is what every company should do – keep raising the prices. But if you think about it, there is a logical limit which defensive prices cannot exceed because if you exceed them you start to see perverse incentives emerge,” Moussouris said. “I think the offense market, also known as the black market, will very quickly adjust.”
Ransomware detections soared by 365% year-on-year in the second quarter of 2019, according to the latest report from Malwarebytes.
This figure is even higher than the 235% increase in overall threats aimed at businesses from 2018 to 2019, the security vendor claimed in its latest quarterly threat report, Cybercrime techniques and tactics (CTNT): Ransomware retrospective.
At the same time, consumer ransomware detections continued to decline, by 12% year-on-year, as hackers turn their attention to higher value targets.
Among the most frequently targeted organizations in Q2 were US cities, healthcare organizations (HCOs), and schools and universities. Legacy IT infrastructure and a lack of funding for security initiatives have left these sectors particularly exposed, Malwarebytes claimed.
Among the most prolific ransomware strains targeting organizations in Q2 were Ryuk, with detections increasing 8% from the previous quarter, and Phobos, which witnessed massive growth of 940% from Q1 2019.
GandCrab, Troldesh, Rapid and Locky were also notable in the quarter, although GandCrab detections slowed by 5% as new ransomware-as-a-service strain Sodinokibi took over using similar components.
Unsurprisingly, the US was the biggest victim globally, accounting for 53% of attacks, followed by Canada (10%) and the UK (9%).
Nearly half of all detections in 2018 happened in North America, with EMEA accounting for 35%, Latin America 10% and APAC 7%, according to the report.
“This year we have noticed ransomware making more headlines than ever before as a resurgence in ransomware turned its sights to large, ill-prepared public and private organizations with easy-to-exploit vulnerabilities, such as cities, non-profits and educational institutions,” said Adam Kujawa, director of Malwarebytes Labs.
“Our critical infrastructure needs to adapt and arm against these threats as they continue to be targets of cyber-criminals, causing great distress to all the people who depend on public services and trust these entities to protect their personal information.”
Transport for London (TfL) was forced to temporarily suspend the website for its Oyster system this week after an apparent credential stuffing attack on customers.
The top-up card allows users to travel around the capital on Tube, bus and Overground services, adding to and checking their balance online or at ticket machines.
However, the website is currently ‘down for maintenance’ and a statement from the transport service suggests a credential stuffing attack.
“We believe that a small number of customers have had their Oyster online account accessed after their login credentials were compromised when using non-TfL websites,” a spokesperson claimed.
“No customer payment details have been accessed, but as a precautionary measure and to protect our customers’ data, we have temporarily suspended online contactless and Oyster accounts while we put additional security measures in place. We will contact those customers who we have identified as being affected and we encourage all customers not to use the same password for multiple sites.”
Credential stuffing is an increasingly popular tactic, exploiting the huge volumes of stolen passwords on the dark web and the fact that users tend to reuse these log-ins across multiple sites. A hacker only has to get lucky 1% of the time to reap a decent ROI from these automated attacks. Attacks are estimated to cost EMEA firms as much as $4m each year.
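The ROI claim is easy to sanity-check with back-of-the-envelope numbers. All figures below are illustrative assumptions, not data from the article:

```python
def stuffing_roi(pairs_tried, success_rate, value_per_account, cost_per_million):
    """Rough economics of an automated credential-stuffing run."""
    compromised = pairs_tried * success_rate
    revenue = compromised * value_per_account
    cost = pairs_tried / 1_000_000 * cost_per_million
    return compromised, revenue - cost

# Assumed: 1M stolen username/password pairs replayed against one site,
# 1% reuse/success rate, $10 resale value per account, and $200 per million
# login attempts in proxy and compute costs.
compromised, profit = stuffing_roi(1_000_000, 0.01, 10.0, 200.0)
```

Even with only 1 in 100 credentials working, the attacker's cost per attempt is so low that the run is overwhelmingly profitable, which is why rate limiting and breach-password blocking target the cost side of this equation.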
Dashlane CEO Emmanuel Schalit argued that password management has become too difficult for the average internet user.
“Dashlane has found that the average internet user has over 200 digital accounts that require passwords, and the company projects this figure to double to 400 in the next five years. Managing passwords for them all has become incredibly hard,” he explained.
“We then bury our heads in the sand and ignore this problem, use the same password everywhere thinking everything is fine, and then we get hacked. Everyone should have a unique password for every one of their digital accounts. This ensures that even if one account is breached your other accounts will be secure. This is the digital version of the ‘containment’ doctrine; if one account is compromised the damage will not spread.”
The answer is to use a password manager, he added, although best practice should extend to switching on two-factor authentication for all websites and apps.
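One practical way to act on this advice is to check whether a password already appears in known breach corpuses before using it. A minimal sketch in Python, using the real Have I Been Pwned "Pwned Passwords" range endpoint; the `fetch` parameter is an assumption added here purely so the logic can run offline, and is not part of any official client:

```python
import hashlib


def hibp_range_query(password):
    """Split a password's SHA-1 hash for a k-anonymity range query.

    Only the 5-character prefix is ever sent to the API; the full
    hash never leaves the client.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def is_breached(password, fetch=None):
    """Return True if the password appears in the Pwned Passwords corpus.

    `fetch` takes a URL and returns the response body as a string;
    injected here so the check is testable without network access.
    """
    prefix, suffix = hibp_range_query(password)
    if fetch is None:
        from urllib.request import urlopen
        fetch = lambda url: urlopen(url).read().decode()
    body = fetch("https://api.pwnedpasswords.com/range/" + prefix)
    # Each response line is "<hash-suffix>:<breach-count>".
    return any(line.split(":")[0] == suffix for line in body.splitlines())
```

A password manager does this kind of screening automatically; the sketch only illustrates why a single leaked credential need not compromise every account when passwords are unique per site.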
Symantec has announced the $10.7bn sale of its enterprise business to chip giant Broadcom, whilst cutting around 7% of its global workforce.
The deal will see the Symantec name also taken by Broadcom, while the security firm will retain its Norton LifeLock business.
Broadcom said it expected the cash deal to close in Q1 2020.
“This is a transformative transaction that should maximize immediate value to our shareholders while maintaining ownership in a pure play consumer cyber safety business with predictability, growth and strong consistent profitability,” said Rick Hill, interim president and CEO at Symantec.
“In addition it allows the Enterprise Security business to grow and compete on an enterprise platform with a worldwide sales and distribution reach which can service our existing customers.”
The deal is the latest in a raft of takeovers as Broadcom seeks to establish itself as a major IT infrastructure player, expanding beyond its core competence of processors. Over the past few years it has bought Brocade for $5.5bn and CA Technologies for nearly $19bn.
However, its attempts to buy US chip giant Qualcomm faltered after the influential Committee on Foreign Investment (CFIUS) said it may create a national security risk. That’s despite Broadcom’s decision to move its headquarters from Singapore to California last year.
The firm is looking for additional revenue streams from software, as it has reportedly been particularly badly hit by the US-China stand-off, with Huawei one of its biggest customers.
It’s not believed the CFIUS will be looking at the Symantec deal.
Also yesterday, Symantec announced its Q1 financials, which included plans to close various facilities and datacenters and cut around 7% of its global workforce.
“The company estimates that it will incur total costs in connection with the restructuring of approximately $100m, with approximately $75m for severance and termination benefits and $25m for site closures. These actions are expected to be completed in fiscal 2020,” it reported.
In a panel at Black Hat USA, former members of the hacking collective Cult of the Dead Cow were joined by author Joseph Menn, who wrote the recent memoir Cult of the Dead Cow: How the Original Hacking Supergroup Might Just Save the World.
Asked about the legacy of the hacking group by Menn, former members Christien ‘Dildog’ Rioux, Peiter ‘Mudge’ Zatko and Luke ‘Deth Veggie’ Benfey said that there was an issue with vendor claims that “passwords were uncrackable” and that “buffer overflow” did not work in Windows.
Dildog and Mudge, who were also members of the Boston-based hacking group L0pht, said the two groups coexisted and drove each other. “It was good for the L0pht, as it was driven by Cult of the Dead Cow and acted like marketing for L0pht,” Dildog said, adding that the work “provided a good opportunity to do something technical, and raised the level of discussion.”
Deth Veggie explained that the intention of the group was to “try to make a difference” as operating systems were marketed as being safe, and they were not “and they were forced to change.” Dildog said that patches were usually issued once or twice a year, and the work done by the groups forced the change.
Menn asked the three panelists about the move to hacktivism, which was preceded by “not for profit hacking,” which Deth Veggie said was inspired by discussions of having “power and influence” and where it could be leveraged. “We used it to go through the media and through technology and influence” he said, and working with human rights organizations “showed us a way to influence to focus our message.”
Deth Veggie acknowledged that there are still disagreements about what is valid hacktivism and what is not, as “some believe denial of service is now applicable as a means of protest and it is still going on.”
Mudge added that the Cult of the Dead Cow was “drawn to environments that were complex” and it was about “opening doors so others can do it.”
Asked by Menn if the Cult of the Dead Cow were to come together now, what form could it take, Mudge pointed at Germany’s Chaos Computer Club who have followed the model “with policies and steps for government,” while the Cult of the Dead Cow “opened the door for me and hopefully for other people.”
The subject of former member and current Democratic presidential candidate Beto O’Rourke was raised, after it was revealed earlier this year that he had been a member of Cult of the Dead Cow. Deth Veggie said he had seen what Beto had written about “and this influenced my own views and other viewpoints than my own development,” while Mudge said that he was a “friendly guy.”
DevSecOps isn't just yet another meaningless buzzword, it's an approach that has a number of steps and real technologies that can be used to help effectively reduce risk. That's the message coming out of a session at the Black Hat USA conference in Las Vegas titled, "DevSecOps: What, Why and How."
Anant Shrivastava, regional director for Asia Pacific at NotSoSecure, explained that an idealistic goal for many organizations is to be secure by default. DevSecOps is an approach that integrates security via tools into both the developer and operations workflow, and can help to create a culture of security as code within an organization.
"DevSecOps makes it easier to manage the rapid pace of development and large scale secure deployments," Shrivastava said. "Security has to be part of the process, it can't be a step that only occurs at the end."
In the modern DevOps approach to code development, a developer builds code in an IDE (Integrated Development Environment), checks the code into a source code repository, and then moves it through a continuous integration/continuous deployment (CI/CD) server out to production. Shrivastava said that at each stage of the DevOps process there are tools and controls that can be used to enable better security.
The first step in the DevSecOps pipeline is to have what Shrivastava referred to as "pre-commit hooks" for a developer's workstation to make sure that sensitive information such as access keys are not directly integrated into code commits. IDE plugins can also be used to help developers identify potential bugs in code that could lead to exploitable vulnerabilities.
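The pre-commit idea can be sketched in a few lines. This is a minimal, assumption-laden illustration, not a real hook: the two regex patterns are hypothetical examples, and production tools such as gitleaks or detect-secrets ship far more comprehensive rule sets.

```python
import re
import subprocess
import sys

# Illustrative patterns only; real secret scanners maintain large rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA|EC) PRIVATE KEY-----"),  # private key header
]


def find_secrets(text):
    """Return every secret-looking string found in the given text."""
    return [m.group(0) for pat in SECRET_PATTERNS for m in pat.finditer(text)]


def check_staged_changes():
    """Scan the staged diff; return 1 (to abort the commit) if secrets are found."""
    diff = subprocess.run(["git", "diff", "--cached"],
                          capture_output=True, text=True).stdout
    hits = find_secrets(diff)
    for hit in hits:
        print("possible secret in staged changes:", hit[:12] + "...",
              file=sys.stderr)
    return 1 if hits else 0

# Saved as an executable .git/hooks/pre-commit script that calls
# sys.exit(check_staged_changes()), a non-zero exit blocks the commit.
```

The key design point is that the hook runs on the *staged* diff, so a key pasted into a file but never committed is still caught before it reaches the repository history.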
Software Composition Analysis (SCA) is another key step for developers embracing a DevSecOps model.
"We don't write software as much as we build on frameworks with the biggest portion of software now being third party libraries," Shrivastava said. "Software Composition analysis performs checks to identity vulnerable in outdated third party libraries."
Static analysis is the next step in the DevSecOps pipeline. Shrivastava explained that static analysis tools enable automated code review that can find software defects such as SQL injection and cross-site scripting (XSS). Static analysis runs against code that is not executing; its counterpart, dynamic analysis, looks for defects in running code.
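To make the static analysis idea concrete, here is a tiny checker built on Python's standard `ast` module. It looks for one classic SQL injection shape, string concatenation or `%`-formatting passed straight into an `execute()` call, without running the code at all. It is a sketch of the technique, not a stand-in for a real analyzer, which tracks taint across many more patterns:

```python
import ast


def find_sql_concat(source):
    """Return line numbers of execute() calls whose first argument is
    built by string concatenation or %-formatting, a common
    SQL-injection pattern that static analyzers flag."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"
                and node.args
                and isinstance(node.args[0], ast.BinOp)
                and isinstance(node.args[0].op, (ast.Add, ast.Mod))):
            findings.append(node.lineno)
    return findings
```

Note that the parameterized call on line two of the sample below is correctly left alone; the analysis reasons only about the structure of the syntax tree.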
Moving from development into production, DevSecOps also seeks to help secure the infrastructure that is used for application deployment. That's where the idea of having security defined as code within infrastructure fits in.
"Infrastructure as code allow you to document and have version control for infrastructure," he explained. "It allows you to perform an audit on the infrastructure and the whole environment can be as secure as the base image."
Having all the different DevSecOps controls and tools in place can also add a new layer of complexity as each of the tools has its own report format. That's why Shrivastava said that there is also a need for vulnerability management in the DevSecOps pipeline, to act as a central dashboard for all the different reports.
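The core of such a dashboard is a normalization layer that maps each tool's report format onto one common record shape. A sketch under assumed inputs; every field name here is illustrative rather than taken from any specific product:

```python
def normalize(tool, raw):
    """Map a tool-specific finding onto a common record format
    (the field names are illustrative, not a real schema)."""
    if tool == "sca":
        return {"source": "sca", "id": raw["advisory"],
                "location": raw["package"], "severity": raw["severity"]}
    if tool == "sast":
        return {"source": "sast", "id": raw["rule"],
                "location": raw["file"] + ":" + str(raw["line"]),
                "severity": raw["severity"]}
    raise ValueError("unknown tool: " + tool)


def dedupe(findings):
    """Collapse the same finding reported across repeated pipeline runs."""
    seen, unique = set(), []
    for f in findings:
        key = (f["source"], f["id"], f["location"])
        if key not in seen:
            seen.add(key)
            unique.append(f)
    return unique
```

Once every finding shares one shape, triage, trend reporting and ticket creation can all run off a single queue instead of one per tool.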
Finally, even with all the various tools to check code at different levels, vulnerabilities will still get through. Shrivastava said that once code is deployed, it's imperative to have alerting and monitoring tools in place to see if anything malicious is still able to get through.
"We work under the whole assumption that anything can be hacked, but you can still make life miserable for the attackers, that's the game," he said.
The GDPR (General Data Protection Regulation) is supposed to help individuals keep their information private, but as it turns out, it could also potentially serve to help attackers as well.
In a session at the Black Hat USA conference in Las Vegas, titled, "GDPArrrrr: Using Privacy Laws to Steal Identities", James Pavur, DPhil student and Rhodes Scholar at Oxford University, outlined how he was able to abuse a key component of the GDPR to get access to personally identifiable information for his fiancée.
Pavur said that there are multiple properties of GDPR that a social engineering attacker could seek to exploit. The first is fear of non-compliance, since GDPR prescribes large fines for violations.
GDPR also has tight timelines for disclosure and compliance which puts pressure on organizations. There is also a certain amount of ambiguity in the actual language of the regulation. Finally, much of the response to GDPR requests involves humans due to the complexity of the process.
The weak point in GDPR targeted by Pavur is the Right of Access provision, which gives European citizens the right to request all of their data from a given provider that holds information on them.
Using a simple email that included basic information such as name, email address and phone number, Pavur sent requests to over 150 organizations to see what kind of response he could get, and ended up with some surprising results.
While 39 percent of requests were denied, with providers requiring stronger forms of identification than just an email address and a phone number, 24 percent of providers gave Pavur the information he requested, and an additional 16 percent accepted the request but asked for an additional, weaker form of authentication, which he was able to provide.
Only 13 percent of organizations ignored the request outright, while, shockingly, three percent deleted the account in question rather than deal with the request at all. Pavur said the account deletion was not something he had expected, and that it could potentially be used as a form of identity denial-of-service attack.
The ambiguity in the GDPR language is that the regulations state that the requestor has to provide "reasonable" ID verification. Different organizations asked for different verification, ranging from something as simple as a signed letter or even just being able to answer a knowledge question about the user. Fundamentally though, Pavur said that most organizations simply just don't have the ability to verify the documentation that they ask for in any case.
The information that Pavur was able to get from his data requests also varied, with a major hotel chain for example providing data about all of the target user's stays at the hotel. Another provider sent him more sensitive information including the target's social security number.
While there are challenges with GDPR's Right of Access, Pavur also provided a few recommendations for what organizations can do to help protect themselves and their users information from fraudulent data requests.
The first and most basic suggestion Pavur offered is for companies to simply say no to suspicious GDPR data requests. If the request turns out to be genuine, refusing it could land the provider in a courtroom, he said, but that is better than giving out customer information to an attacker. He added that if the provider can demonstrate it was acting in good faith, the risk is reduced.
Pavur also suggested that legislators clarify what appropriate forms of identity verification are, and said it is also critical to provide government-mediated identity verification services.
"The core point is that privacy laws should enhance privacy not endanger it," he said.
Revealing new research on the Russian dark web, Ariel Ainhoren, research team leader at IntSights, told Infosecurity that websites local to Russia were a “unique part of the dark web” due to local laws and government influence.
Ainhoren pointed to several sites on the dark web, which he said “look like any other sites,” some of which are available on the surface web. He explained that the first website, hackzone.ru, was started in 1997, when there was a common Russian mentality of doing things yourself. This, he said, led Russians to start their own discussion boards.
Another website, named Exploit.in, was started in 2005 and now has around 45,000 users. While it requires only registration to use, it is available on the surface web. “It became an industry and became a pyramid,” Ainhoren said, explaining that malware such as the Gandcrab ransomware was created on Exploit.in and distributed further via layers of middlemen.
He said: “It’s a business model. It started as a nice place to talk and switch ideas, and it is growing all of the time.”
Another website that Ainhoren showed Infosecurity had a thread with a working exploit for the BlueKeep vulnerability.
Asked if there were common rules among the users, Ainhoren said that there is an understanding of not attacking other Russians or Russian websites, or anything in the former Commonwealth of Independent States (CIS). In another case, a Syrian was hit with ransomware and after saying they were unable to pay the ransom, a filter was added so certain ransomware could not infect anyone determined to be from Syria.
“It’s an issue of nationality,” Ainhoren said, adding that, as was seen with the Crimea conflict, there is freedom to attack US and European domains.
He also said that Russian authorities often turn a blind eye to these websites, and will not take them down as they “align with Russian government interest.”
He said that the Russian internet was built as a free network, and closed down over the years by a series of laws which restricted the freedom of the internet, and insisted on only using local VPNs and verifying SIM cards.
“For the dark web, it means a lot more anonymity. On one hand the government can turn a blind eye, and on the other close in on them and be more aligned with Russian interest,” Ainhoren said. “The dark web is a wealth engine that brings in money.”
The statistics for gender diversity in the industry, Lynch pointed out, are worrying. Not only is the industry not seeing positive trends in this space, but actually in many areas we are seeing worsening statistics. For example, there has been a steady decrease in women graduating with computer science degrees over the past 35 years.
Perhaps more worryingly, women exit the cybersecurity industry within a decade at twice the rate of men. Of those leaving the industry, 77% cited extreme pressure and a “hostile ‘macho’ culture” as their reasons for doing so.
Lynch blames implicit bias, amongst other things, for this trend. “Examples of this are the male-orientated language used, crediting an idea to the wrong person, underestimating ability and making incorrect assumptions about someone else’s role,” she said.
There is also the stereotype threat, she explained. “There is a fear that one will fulfill existing and negative stereotypes,” said Lynch. “This is proven to increase anxiety and decrease productivity and performance.”
To counteract this, Lynch suggested increasing the visibility of women at all levels. “It’s important to convey the high value of diversity.” She also suggested that mentors and sponsors providing endorsement and advocacy would make a positive difference.
“It’s a complicated problem but the solutions are simple,” concluded Lynch. “It comes down to empathy and showing up for one another.”
Speaking at Black Hat USA, Google Project Zero manager Ben Hawkes looked back at five years of the vulnerability research team and said its future success depends on more such teams forming.
Looking back at the formation of Project Zero, Hawkes said that there was a sense that the zero-day was a problem “for Google and society as a whole” and there has since been a shift for zero-days to be beneficial for offensive security. “So after five years, the question to ask is, is zero-day hard yet?”
Hawkes said that Project Zero was founded on principles including “good defense [which] requires a detailed knowledge of offense” and looking at the software that we rely on, not just Google Chrome and Android.
“When you think of Project Zero, autonomy comes to mind,” he added. “We are all bound by a mission and principles, and the key innovation is researchers have individual freedom to pursue their own independent research agenda.”
He explained that the team’s research breaks down into 54% manual review, 37% fuzzing, and 8% other types of testing. He also said that part of performing vulnerability research is creating new methodologies that researchers did not previously have access to, and that by “writing an exploit, you’re walking in the shoes of an attacker.” The development of an exploit requires five steps:
- Ensure that the security impact of the bug is well understood
- Establish an equivalence class of similarly exploitable vulnerabilities
- Generate appropriate amounts of urgency
- Surface new and improved exploit techniques
- Find areas of “fragility” in the exploit
Hawkes said that Project Zero is in a position “to advocate for change” and a lot of the job is spent working out “how to be an advocate and what the vendor wants to achieve.”
Looking back at some of the research, Hawkes called the work around Spectre and Meltdown “a moment,” as it changed the way we think about hardware security, led to substantial architecture changes, and marked a redoubled effort to invest in security and build up processes and testing.
“On a side note, vulnerability research has been well received and led to structural improvements,” he said, thanking the vendors and open source community for the work done.
Looking at how to measure the “hard” element of zero-day research, Hawkes said that you can gauge it by the number of vulnerabilities, how many exploits are sold on the “grey market,” or the number of vulnerabilities debugged. “We made an attempt to find something better and more aligned,” he said.
“Instead of marketing it about zero-days being hard, we need to step back and decide what does progress towards hard mean?
“Is it hard? The truth is it is harder, but not hard. If I could stand up and say in five years we are leading to an accomplishment that would be great, but we’re not there yet.”
Hawkes also explained that open attack research “provides the best path for making zero-day hard” and there is “something compelling and powerful in doing work that teaches users to do the right things.”
Looking forward, Hawkes said that we will never finish debating on vulnerability disclosure, and this can be done well “and can be profoundly impactful, but if done poorly there can be systemic risk.” He added that he sees this as an urgent problem, and if people can be promoted and empowered and connected with external researchers, this can “create a pipeline of work that leads to collaboration.”
Concluding, Hawkes said that the way forward is for other companies to follow the Project Zero model, and create their own research teams and “expand the amount of open attack research.”
He said: “We need to focus on our mission and principles and find an area where we see eye to eye as vulnerability disclosure is a distraction, and we need to focus on the common mission and principles.”