The GDPR (General Data Protection Regulation) is meant to help individuals keep their information private, but it turns out it can also serve attackers.
In a session at the Black Hat USA conference in Las Vegas titled "GDPArrrrr: Using Privacy Laws to Steal Identities," James Pavur, a DPhil student and Rhodes Scholar at Oxford University, outlined how he was able to abuse a key component of the GDPR to get access to personally identifiable information about his fiancée.
Pavur said that the GDPR has multiple properties a social engineering attacker could seek to exploit. The first is fear of non-compliance, since the GDPR prescribes large fines for violations.
GDPR also has tight timelines for disclosure and compliance which puts pressure on organizations. There is also a certain amount of ambiguity in the actual language of the regulation. Finally, much of the response to GDPR requests involves humans due to the complexity of the process.
The weak point in GDPR targeted by Pavur is the Right of Access provision, which gives European citizens the right to request all of their data from a given provider that holds information on them.
Using a simple email that included basic information such as name, email address and phone number, Pavur sent requests to over 150 organizations to see what kind of response he could get, with some surprising results.
While 39 percent of requests were denied, with providers requiring stronger forms of identification than just an email and a phone number, 24 percent of providers gave Pavur the information he requested, while an additional 16 percent accepted the request but asked for an additional, weaker form of authentication, which he was able to provide.
Only 13 percent of organizations ignored the request outright, while, shockingly, three percent ended up deleting the account in question rather than deal with the request at all. Pavur said the account deletion was not something he had expected, and could potentially be used as a form of identity denial-of-service attack.
The ambiguity in the GDPR's language is that the regulation states the requester must provide "reasonable" ID verification. Different organizations asked for different verification, ranging from something as simple as a signed letter to just being able to answer a knowledge question about the user. Fundamentally, though, Pavur said that most organizations simply don't have the ability to verify the documentation they ask for in any case.
The information that Pavur was able to get from his data requests also varied, with a major hotel chain for example providing data about all of the target user's stays at the hotel. Another provider sent him more sensitive information including the target's social security number.
While there are challenges with the GDPR's Right of Access, Pavur also provided a few recommendations for what organizations can do to help protect themselves and their users' information from fraudulent data requests.
The first and most basic suggestion Pavur offered is for companies to simply say no to suspicious GDPR data requests. If the request turns out to be real, he said, it could land the provider in a courtroom, but that is better than giving out customer information to an attacker. He added that if the provider can demonstrate it was acting in good faith, the risk is reduced.
Pavur also suggested that legislators clarify what appropriate forms of identification are, and said it is also critical to provide government-mediated identity verification services.
"The core point is that privacy laws should enhance privacy not endanger it," he said.
Revealing new research on the Russian dark web, Ariel Ainhoren, research team leader at IntSights, told Infosecurity that Russia's local websites were a “unique part of the dark web” due to local laws and government influence.
Ainhoren pointed to several sites on the dark web which he said “look like any other sites,” some of which are available on the surface web. He explained that the first website, hackzone.ru, was started in 1997, as there was a common Russian mentality to do things yourself, which led users to start their own discussion boards.
Another website, named Exploit.in, was started in 2005 and now has around 45,000 users. Requiring only registration to use, it is available on the surface web. “It became an industry and became a pyramid,” Ainhoren said, adding that malware such as the GandCrab ransomware was created on Exploit.in and distributed further via layers of middlemen.
He said: “It’s a business model. It started as a nice place to talk and switch ideas, and it is growing all of the time.”
Another website that Ainhoren showed Infosecurity had a thread with a working exploit for the Bluekeep vulnerability.
Asked if there were common rules among the users, Ainhoren said that there is an understanding of not attacking other Russians or Russian websites, or anything in the former Commonwealth of Independent States (CIS). In another case, a Syrian was hit with ransomware and after saying they were unable to pay the ransom, a filter was added so certain ransomware could not infect anyone determined to be from Syria.
“It’s an issue of nationality,” Ainhoren said, adding that, as we saw with the Crimea conflict, there is freedom to attack US and European domains.
He also said that Russian authorities often turn a blind eye to these websites, and will not take them down as they “align with Russian government interest.”
He said that the Russian internet was built as a free network and closed down over the years by a series of laws that restricted the freedom of the internet, mandated the use of local VPNs only and required SIM card verification.
“For the dark web, it means a lot more anonymity. On one hand the government can turn a blind eye, and on the other close in on them and be more aligned with Russian interest,” Ainhoren said. “The dark web is a wealth engine that brings in money.”
The statistics for gender diversity in the industry, Lynch pointed out, are worrying. Not only is the industry not seeing positive trends in this space, but actually in many areas we are seeing worsening statistics. For example, there has been a steady decrease in women graduating with computer science degrees over the past 35 years.
Perhaps more worryingly, women exit the cybersecurity industry within a decade at twice the rate of men. Of those leaving the industry, 77% cited extreme pressure and a “hostile ‘macho’ culture” as their reasons for doing so.
Lynch blames implicit bias, amongst other things, for this trend. “Examples of this are the male-orientated language used, crediting an idea to the wrong person, underestimating ability and making incorrect assumptions about someone else’s role,” she said.
There is also the stereotype threat, she explained. “There is a fear that one will fulfill existing and negative stereotypes,” said Lynch. “This is proven to increase anxiety and decrease productivity and performance.”
To counteract this, Lynch suggested increasing the visibility of women at all levels. “It’s important to convey the high value of diversity.” She also suggested that mentors and sponsors providing endorsement and advocacy would make a positive difference.
“It’s a complicated problem but the solutions are simple,” concluded Lynch. “It comes down to empathy and showing up for one another.”
Speaking at Black Hat USA, Google Project Zero manager Ben Hawkes looked back at five years of the vulnerability research team and said its future success depends on more such groups forming.
Looking back at the formation of Project Zero, Hawkes said that there was a sense that the zero-day was a problem “for Google and society as a whole” and there has since been a shift for zero-days to be beneficial for offensive security. “So after five years, the question to ask is, is zero-day hard yet?”
Hawkes said that Project Zero was founded on principles including “good defense [which] requires a detailed knowledge of offense” and looking at the software that we rely on, not just Google Chrome and Android.
“When you think of Project Zero, autonomy comes to mind,” he added. “We are all bound by a mission and principles, and the key innovation is researchers have individual freedom to pursue their own independent research agenda.”
He explained that the research includes: 54% manual review, 37% fuzzing, and 8% other types of testing. He also said that part of performing vulnerability research is what new methodologies you can create that the researchers did not have access to previously, and by “writing an exploit, you’re walking in the shoes of an attacker.” The development of an exploit requires five steps:
- Ensure that the security impact of the bug is well understood
- Establish an equivalence class of similarly exploitable vulnerabilities
- Generate appropriate amounts of urgency
- Surface new and improved exploit techniques
- Find areas of “fragility” in the exploit
Hawkes said that Project Zero is in a position “to advocate for change” and a lot of the job is spent working out “how to be an advocate and what the vendor wants to achieve.”
Looking back at some of the research, Hawkes called the work around Spectre and Meltdown “a moment,” as it changed the way we think about hardware security, led to substantial architecture changes and marked a redoubled effort to invest in security and build up processes and testing.
“On a side note, vulnerability research has been well received and led to structural improvements” and he thanked the vendors and open source community for the work done.
Looking at how to measure the “hard” element of zero-day research, Hawkes said that you can gauge it by the number of vulnerabilities, by how many exploits are sold on the “grey market,” or by the number of vulnerabilities debugged. “We made an attempt to find something better and more aligned,” he said.
“Instead of marketing it about zero-days being hard, we need to step back and decide what does progress towards hard mean?
“Is it hard? The truth is it is harder, but not hard. If I could stand up and say in five years we are leading to an accomplishment that would be great, but we’re not there yet.”
Hawkes also explained that open attack research “provides the best path for making zero-day hard” and there is “something compelling and powerful in doing work that teaches users to do the right things.”
Looking forward, Hawkes said that we will never finish debating on vulnerability disclosure, and this can be done well “and can be profoundly impactful, but if done poorly there can be systemic risk.” He added that he sees this as an urgent problem, and if people can be promoted and empowered and connected with external researchers, this can “create a pipeline of work that leads to collaboration.”
Concluding, Hawkes said that the way forward is for other companies to follow the Project Zero model, and create their own research teams and “expand the amount of open attack research.”
He said: “We need to focus on our mission and principles and find an area where we see eye to eye as vulnerability disclosure is a distraction, and we need to focus on the common mission and principles.”
Speaking on “Testing Your Organization's Social Media Awareness” at Black Hat USA, Jacob Wilkin, network penetration tester and application security consultant, Trustwave SpiderLabs, said that social media phishing is on the rise and is now the “preferred vector for attackers,” who spread more malware via social media than via email.
“You’re three times more likely to get click-throughs on social media, and this is important as companies move to BYOD models and people have devices at home and use social media and bring them into work environments,” he said.
Wilkin highlighted a passive testing tool he released last year at the Black Hat Arsenal called “Social Mapper,” which allows you to “feed in a LinkedIn company name and it releases names and images of people at the company,” delivering the names of employees who have been found online.
“This is less intrusive as you don’t interact with profiles; you identify them but are not testing them, and you don’t know if they accept connection requests or clicked on links,” he said. Instead, you get a report detailing people who are recognized as working at a company, and their corresponding social media accounts via facial recognition.
As a follow-up, this week he released an active testing tool called “Social Attacker,” which requires creating a fake social media account, logging into a social media site, feeding in Social Mapper results and sending connection or friend requests to those people along with a phishing test message. This produces a report at the end showing which profiles accepted and who clicked on what, with timestamps.
Wilkin recommended that social media users not use the same name across websites to better protect themselves, not accept connections or messages from people they don’t know and, in a more extreme case, not put a picture on their social media profile.
“As attackers pivot, it is important to raise awareness and encourage social media sites to prevent and detect attacks and review laws to consider permitting security testing,” he concluded.
In a session at the Black Hat USA conference in Las Vegas, F5 Networks researchers outlined the challenges of morphing DDoS attacks and announced the release of a new open source tool called SODA to help test defenses for attack resilience.
SODA is an acronym for Simulation of DDoS Attacks and provides multiple traffic generation tools to simplify DDoS protection testing. The inspiration for SODA came from a July 2018 attack against encrypted email provider ProtonMail by an aggressive form of Distributed Denial of Service (DDoS) attack that was constantly morphing its tactics. The attack and its unique approach to disruption inspired F5 Networks researchers to figure out how to help organizations better defend themselves against the new type of DDoS.
Mudit Tyagi, Architect, Security Products, F5 Networks, explained that the attack vectors used in the ProtonMail morphing DDoS attack included common methods such as UDP and SYN floods.
"What made the attack so complex to defend against was that the attacker kept on changing the attack; they kept on morphing," he said.
Tyagi added that after the ProtonMail attack, his team took it upon themselves to figure out how to catch morphing attacks. The first step was to build a tool that could simulate morphing attacks, so organizations could test their own defences to see what would happen and what might be lacking. The end result of that effort is SODA.
"SODA can be used to put down any part of your infrastructure," explained Mikhail Federov, Product Management Engineer, Security, F5 Networks.
The SODA tool integrates a number of DDoS attack types and morphs the vector on a predefined pattern and interval. On the defender, or blue team, side, Federov explained that the setup brings together multiple components to simulate an environment. Among the tools are DVWA (Damn Vulnerable Web Application), the pfSense firewall, Telegraf for sending metrics, InfluxDB for storing the data and, finally, Grafana for the dashboard. Users put the DDoS solution of their choice in front of the firewall and can then see how it responds to SODA-simulated attacks.
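The metrics pipeline described here (Telegraf shipping measurements into InfluxDB for a Grafana dashboard) can be wired up with a minimal Telegraf configuration along these lines; this is a generic sketch, and the database name and URL are illustrative assumptions rather than values from the SODA project:

```toml
# Collect per-interface packet and byte counters
[[inputs.net]]

# Collect TCP/UDP connection-state counts (useful for spotting SYN floods)
[[inputs.netstat]]

# Ship everything to a local InfluxDB instance
# (URL and database name are illustrative assumptions)
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "ddos_metrics"
```

Grafana would then be pointed at the same InfluxDB database as a data source to chart traffic behavior during a simulated attack.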
Tyagi said that organizations typically configure static vectors for DDoS response with set thresholds, for example limiting UDP traffic at a certain volume. Given that morphing DDoS attacks can take aim at different resources, in his view, thresholds don't work. They also fail because good traffic gets blocked and the potential for false positives is non-trivial.
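Tyagi's point can be illustrated with a short sketch. All vector names, limits and packet rates below are hypothetical, not SODA's actual configuration: a per-vector static threshold never fires against an attack that rotates vectors while keeping each one under its limit, even though the aggregate flood is far above any single limit.

```python
# Sketch: why per-vector static thresholds miss a morphing DDoS.
# Vector names, thresholds and rates are illustrative assumptions.

THRESHOLDS = {          # packets/sec limit before a vector is blocked
    "udp_flood": 100_000,
    "syn_flood": 80_000,
    "dns_amp":   60_000,
}

def static_defense_alerts(samples):
    """Flag any (vector, pps) sample that exceeds its static threshold."""
    return [(vector, pps) for vector, pps in samples
            if pps > THRESHOLDS.get(vector, 0)]

# A morphing attacker rotates vectors each interval, keeping every
# individual vector just under its configured threshold.
morphing_attack = [
    ("udp_flood", 90_000),   # under the 100k UDP limit
    ("syn_flood", 70_000),   # under the 80k SYN limit
    ("dns_amp",   50_000),   # under the 60k DNS limit
]

alerts = static_defense_alerts(morphing_attack)
total_pps = sum(pps for _, pps in morphing_attack)

print(alerts)      # [] -> no single vector trips its threshold
print(total_pps)   # 210000 -> yet the aggregate load is enormous
```

This is why the researchers argue for intelligent, behavior-based mitigation rather than fixed per-vector limits.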
Federov commented that simply doing anomaly detection at the network level is not accurate either and the lesson learned from testing with SODA is that there is also a need to use anomaly detection at the application level.
Tyagi added that SODA is a tool that can be used by organizations to enable bakeoffs in a way that tests resilience for morphing attacks.
"We don't care what you use for DDoS. ProtonMail got attacked and we got really charged, and we wanted to help the community defend against similar types of attacks," he said. "Whatever you use, focus on intelligent mitigation and test your posture. We understand it's hard and that's why we give you a kit with SODA."
Researchers at ESET have discovered malware-distributing spam campaigns targeting people in France.
Dubbed Varenyky, the malicious payload comes with several dangerous functionalities. Not limited to the sending of spam, it can also steal passwords and even spy on victims’ screens while they watch sexual content online.
The first spike in ESET telemetry for this bot came in May 2019, and after further investigation, researchers were able to identify the specific malware used in the spam’s distribution.
“We believe the spambot is under intense development as it has changed considerably since the first time we saw it. As always, we recommend that users be careful when opening attachments from unknown sources and ensure system and security software are all up to date,” said Alexis Dorais-Joncas, leading researcher at the ESET R&D center in Montreal.
As explained in an ESET blog post, Varenyky first infects victims – exclusively French-speaking users in France – with a fake invoice that lures the target into providing “human verification” of the document. From there, the malware executes the malicious payload.
After infection, Varenyky executes Tor software, which enables anonymous communication with its command-and-control (C&C) server.
“It will start two threads: one that’s in charge of sending spam and another that can execute commands coming from its command-and-control server on the computer,” added Dorais-Joncas. “One of the most dangerous aspects is that it looks for specific keywords, such as bitcoin and porn-related words, in the applications running on the victim’s system. If any such words are found, Varenyky starts recording the computer’s screen and then uploads the recording to the C&C server,” he added.
ESET explained that, interestingly, the targets of all the spam runs observed were users of Orange S.A., a French internet service provider.
FireEye has identified a new advanced persistent threat (APT) group, dubbed APT41.
As the firm explained in a blog post, APT41 is “a prolific Chinese cyber-threat group that carries out state-sponsored espionage activity in parallel with financially motivated operations.”
The group has established and maintained strategic access to organizations in the healthcare, high-tech, and telecommunications sectors across various jurisdictions, FireEye continued, with operations against higher education, travel services and news/media firms providing some indication that the group also tracks individuals and conducts surveillance. The group’s financially motivated activity has primarily focused on the video game industry, according to FireEye.
FireEye researchers wrote: “APT41 leverages an arsenal of over 46 different malware families and tools to accomplish their missions, including publicly available utilities, malware shared with other Chinese espionage operations, and tools unique to the group. The group often relies on spear-phishing emails with attachments such as compiled HTML (.chm) files to initially compromise their victims. Once in a victim organization, APT41 can leverage more sophisticated TTPs and deploy additional malware.”
Sandra Joyce, SVP of global threat intelligence at FireEye, said: “APT41 is unique among the China-nexus actors we track in that it uses tools typically reserved for espionage campaigns in what appears to be activity for personal gain. They are as agile as they are skilled and well resourced. Their aggressive and persistent operations for both espionage and cybercrime purposes distinguish APT41 from other adversaries and make them a major threat across multiple industries.”
Researchers at NCC Group have uncovered 35 “significant” vulnerabilities in models from six popular enterprise printer brands.
The firm claimed to have found the flaws using “basic tools,” some of which date back 30-40 years, adding that some bugs were uncovered within mere minutes.
They include buffer overflows, cross-site scripting, denial of service, information disclosure and other flaws as well as hard-coded credentials and broken access controls.
All of the vulnerabilities discovered have now been patched or are in the process of being fixed and system administrators are urged to update the affected models to the latest firmware.
“Because printers have been around for decades, they’re not typically regarded as enterprise IoT, yet they are embedded devices that connect to sensitive corporate networks, and therefore demonstrate the potential risks and security vulnerability posed by enterprise IoT,” argued Martin Lewis, research director at NCC Group.
“Building security into the development lifecycle would mitigate most, if not all, of these vulnerabilities. It’s therefore important that manufacturers continue to invest in and improve cybersecurity, including secure development training and carrying out thorough security assessments of all devices.”
Lewis added that corporate IT can also improve the resilience of any connected devices in the organization, by making small changes such as altering default settings, developing and enforcing secure printer configuration guides and, of course, applying regular firmware updates.
Last year, researchers found two vulnerabilities in HP all-in-one printers which could enable hackers to attack corporate networks simply by sending a specially crafted fax.
Online merchandise store CafePress has been criticized for poor incident response and cybersecurity after it emerged that over 23 million customers had their personal data stolen.
Breach notification site HaveIBeenPwned? was apparently where many customers first heard about the incident, which it said occurred in February this year.
“The exposed data included 23 million unique email addresses with some records also containing names, physical addresses, phone numbers and passwords stored as SHA-1 hashes,” it said in a brief note. The site appears to have been notified about the incident by security researcher Jim Scott.
There doesn’t appear to be any kind of notification on the official CafePress website or Twitter feed.
In fact, according to some customers who logged in to their accounts, the firm is forcing users to change their credentials but merely as part of a claimed ‘update’ to its password policy.
Stuart Reed, VP cyber at UK firm Nominet, pointed to the fact that half of the passwords in the breach were hashed with the weak SHA-1 algorithm.
“This puts those passwords and their owners at risk not only from these compromised records but also if the passwords have been reused elsewhere. Given that the passwords have potentially been out in the wild since February, security for those affected has potentially been compromised for the past six months,” he argued.
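The risk Reed describes is easy to demonstrate: an unsalted SHA-1 hash is deterministic and extremely fast to compute, so a leaked hash can be matched against a password dictionary almost instantly. A minimal sketch follows; the wordlist and "leaked" hash are invented for illustration and are not breach data:

```python
import hashlib

def sha1_hex(password: str) -> str:
    """Unsalted SHA-1, as reportedly used for half the CafePress passwords."""
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

# An attacker with a leaked hash simply hashes a wordlist and compares.
wordlist = ["letmein", "password1", "hunter2", "qwerty"]
leaked_hash = sha1_hex("hunter2")   # stand-in for a stolen database entry

cracked = next((w for w in wordlist if sha1_hex(w) == leaked_hash), None)
print(cracked)  # hunter2
```

Because the hashes are unsalted, identical passwords produce identical hashes across all the exposed records, so cracking one hash exposes every account that chose the same password; a slow, salted scheme such as bcrypt is the standard defense.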
“It is fundamental that firms identify and take action against data breaches fast. Identifying large scale exfiltration attacks, stopping the attack and keeping those affected informed as quickly as possible is the only way to successfully mitigate the impact.”
Layered security is vital, covering people, process and technology, he added.
“While two-factor authentication, not using the same passwords, and changing your passwords when a breach has happened are all good practice, there has to be more responsibility taken by breached organizations to prevent, detect and block attacks more quickly,” said Reed.
Martin Jartelius, CSO at Outpost24, argued that the firm could be in breach of GDPR rules if it has failed to respond in a timely manner and EU citizens are affected.
“It is there to decrease the risk of exposing users' private information, and most importantly it is there to ensure that if a company fails to protect users, they have the right to be informed and thereby take corrective actions,” he said.
“The bad habit of user password reuse means that while CafePress logins may be protected by the forced password reset, any re-use of passwords may lead to consequences for users. Sadly withholding this information is a very bad practice.”
Speaking at Black Hat USA in a session titled 'Deconstructing the Phishing Campaigns that Target Gmail Users,' Elie Bursztein, security and anti-abuse research lead at Google, and Daniela Oliveira, associate professor at the University of Florida, said that “phishing is 45-times more dangerous than having your data exposed.”
Bursztein said that phishing is an ever-evolving target: every day, Gmail blocks over 100 million phishing emails, which it categorizes into three levels of sophistication. At the top is spear phishing, deemed an “extreme case of sophistication and highly targeted”; in the middle is boutique phishing, crafted campaigns targeted at individuals in organizations; and at the bottom is bulk phishing, typically mass campaigns spread through botnets.
“Phishing is adversarial, the attacker is shifting and messages keep being changed,” Bursztein added, highlighting a series of phishing messages from the last decade which were all different and have refined colour, shape and appearance to better avoid detection.
“Of the 100 million phishing emails we blocked, 68% had never been seen before,” he said. “It doesn’t mean that they are radically different, it just means that the adversaries have tweaked them in a way so they are not exactly the same.”
He said that every day, the system has to account for two-thirds of data that it has never seen before “and this is the difficulty with phishing, where the attacker keeps changing the content.”
The research also showed that a boutique email has a lifespan of around seven minutes from when it is first seen, while a bulk campaign’s life is 13 hours.
“A phishing campaign today is very different from what we will see tomorrow, so we have to take this context and keep investing in better detection techniques,” he said.
Bursztein also pointed out that phishing is targeted, saying that people with a business email address are 4.8-times more likely to receive a phishing email. “Why? Because phishers are selective,” he said. “Remember, they are financially motivated, so for the highest target, business email compromise is the main problem.”
He added that, in order to better educate users, a yellow banner has been implemented as a “soft warning” for messages Gmail cannot confirm as phishing, so the user makes the final decision.
Oliveira said that we are “all susceptible to phishing” as phishing tricks the brain in the way we make decisions, especially with deception and detection.
She argued that user awareness is critical to making a decision, and Bursztein concluded by saying that “there is no silver bullet when it comes to defending against phishing,” but he recommended using two-factor authentication and user education to help protect users, and highlighted “an ever pressing need to work on improving detections and on classifiers to deal with the onslaught of attacks.”
North Korean hackers have earned the Kim Jong-un regime in the region of $2bn after targeting banks and cryptocurrency exchanges, according to a new UN report.
The effort was likely coordinated by the hermit nation’s top military intelligence agency, the Reconnaissance General Bureau, according to the report, which was leaked to the press on Monday.
“Democratic People’s Republic of Korea cyber actors, many operating under the direction of the Reconnaissance General Bureau, raise money for its WMD (weapons of mass destruction) programs, with total proceeds to date estimated at up to two billion US dollars,” it noted.
Investigators are said to be looking at “at least 35 reported instances of DPRK actors attacking financial institutions, cryptocurrency exchanges and mining activity” across 17 countries designed to generate foreign currency.
By doing so, it is believed that they would be able “to generate income in ways that are harder to trace and subject to less government oversight and regulation than the traditional banking sector,” as well as being easier to launder.
The news comes following several missile launches by the North Korean regime in May and July, which the UN said had “enhanced its overall ballistic missile capabilities.”
Despite the high-profile meeting of Donald Trump and Kim Jong-un, the country continues to break sanctions by buying WMD-related items and luxury goods, and is enhancing its nuclear and missile program, the UN claimed.
It’s been known for a while that North Korean hackers have been targeting cryptocurrency exchanges, but until now reports were piecemeal, hiding the true scale of the operation.
As far back as 2017 there were reports of state hackers targeting a London cryptocurrency firm and low velocity cryptocurrency mining operations. Last year reports suggested they managed to steal over £31m from South Korean exchange Bithumb.
At Black Hat USA in Las Vegas, Anomali threat research team manager Joakim Kennedy explained to Eleanor Dallaway why he believes the open source movement in the cybersecurity industry will help to address the skills gap.
“One way of opening up the industry to more people is to provide good free tools accessible to everyone.” The open source movement allows people “to take the toolkits and moderate them.” This, he said, is particularly relevant to teenagers and people outside of the cybersecurity industry who may have an interest in joining. “The best way to learn is to get hold of toolkits and play with them, moderate them,” he said, explaining that his own path into the industry began as a teenager, “using whatever tools were available” and educating himself.
Making these open source tools available “will trigger the interest of the next generation of potential employees by giving them the tools to play with for free and get their interest. We need to get more interested people into the field and there’s a high threshold to get started.” He explained this high threshold means that the paid products and tools in the industry are very expensive. “The license price is too high.”
Anomali’s Kennedy explained that when new starters are employed without industry background, “it takes a lot of training to teach them new tools and techniques.” If open source toolkits were used in university programs, candidates would gain helpful exposure to the tools they’ll need in future roles. “Imagine having to train new employees in Microsoft Office,” said Kennedy, to emphasize his point.
What makes a good cybersecurity professional, explained Kennedy, is “being a good problem solver, having curiosity and a willingness to learn.” If a candidate has those qualities, they can be trained, said Kennedy.
Open source toolkits are useful for researchers, but “the market isn’t there to sell it. We write them to give back to the research community. The evolution in the industry means our tools have to be modified to fit what is current. That’s the benefit of open-source – it can evolve with the industry.”
Eco-systems are being built around open source toolkits, explained Kennedy. “A lot of paid tools allow for open source plug-ins to automate tasks. A lot of these plug-ins are being released freely to support commercial services.”
Kennedy doesn’t understand why CISOs are often reluctant to allow open source tools into their organizations. “What are they afraid of? They can audit them – which you can’t do with a proprietary product. They have to put their trust in that vendor for that. With open source, they can audit it themselves.”
When asked what new threats his team is observing, he responded: “Threats are just evolutions of older threats. What we’ve seen in the past year has been a shift in the way ransomware is being used.” Cryptomining briefly overtook ransomware, but when the crypto market crashed, ransomware took over once again. “Now, however, rather than targeting the masses, ransomware attacks are more targeted and focused in their approach. Gone are the days of spam and send-to-all targets. Now they specifically choose their targets and plan how to get in much more closely.”
“Ultimately,” concluded Kennedy, “security is being better than your neighbors so they break into them and not you. A lot of criminals just look for low-hanging fruit, so make it as hard for them as possible.”
After previously announced keynote speaker Will Hurd was withdrawn amid criticism from the security community over his voting record, Dino Dai Zovi took the opportunity to focus on the “shift left” concept and on how he had worked his way through events like Pwn2Own and security jobs where he had seen differing security cultures.
He said that after starting his job at Square in 2014, he was able to overcome some of the collaboration problems he had seen in other jobs, particularly because there was a culture of collaboration and empathy, “as security engineers wrote code like everyone else.”
“A software team member said 'hello, security friends' and asked a question, and someone voluntarily talked to security. It took me a while to figure out what the ingredients were, and that was the transformative change for me.”
He said that seeing this firsthand moved him from being critical to demonstrating capabilities, because “we are not insiders anymore” and we need opportunities to demonstrate what we have learned.
To be better at security, he recommended looking at three transformative lessons:
- Work backwards from the job
- Seek and apply leverage
- Remember that culture shapes strategy and tactics
The first lesson is “what customers hire us for,” as agility “is important as threats change, and it is important to keep up.”
The second lesson should be about the fact that “we are still a small community and problems we tackle are huge,” Zovi said. If we have better feedback loops, he said, we can measure attacking and succeeding and consequently develop better software.
The third lesson is that culture is hard: “ops and dev jobs are hard, and to allow change, we need to allow change to happen.” He also said that it is about cultivating a culture of empathy: instead of saying no, “say yes and how we can help,” and move away from a culture of blame.
“If we do this better, it will shape our strategy and shape our tactics and have an impact on results. And that is why we should focus on generating generative cultures,” he said. “Security teams are afraid and there are good reasons to be afraid, as there is a lot of bad activity going on out there, a lot of breaches, a lot of scary things and new stuff every day. But fear misguides us, as it is irrational, and if we are afraid of tail risks we could have a deprioritization of our resources. We may focus completely on targeted zero-day attacks and completely ignore credential stuffing attacks, which are far more common and way more likely to affect most people.”
He concluded by encouraging the world “to start with yes” as it keeps the conversation going and is collaborative and constructive. “That is how we have real change and have real impact.”
The LokiBot malware continues to evolve and is now using steganography to cloak its malicious files, according to a report from Trend Micro this week.
Recently highlighted as one of the top three malware strains of 2018, LokiBot started out as a password- and cryptocurrency wallet–stealing malware on hacker forums as early as 2015, but it has evolved, according to Trend Micro. It has taken to abusing the Windows installer and updating the methods that it uses to stay on the victim's system.
Now, Trend Micro has identified a new variant of the malware that uses steganography to help hide its malicious intent. The variant installs itself as a .exe file alongside a separate .jpg image file. The image opens normally, but it also contains data that LokiBot references when unpacking itself.
This LokiBot variant drops the image and the .exe file into a directory that it creates, along with a Visual Basic script file that runs the LokiBot file. Its unpacking program uses a custom decryption algorithm to extract the encrypted binary from the image.
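The mechanics of this kind of image-based hiding are simple to illustrate. The following is a minimal sketch of appending an encrypted blob after a JPEG's end-of-image marker and recovering it; the XOR cipher, key, and file layout here are invented for illustration, and the actual variant uses its own custom decryption algorithm:

```python
def embed(jpeg_bytes: bytes, payload: bytes, key: bytes) -> bytes:
    # XOR-encrypt the payload and append it after the end of a valid JPEG.
    # The image still opens normally; viewers ignore trailing bytes.
    enc = bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))
    return jpeg_bytes + enc

def extract(stego_bytes: bytes, key: bytes) -> bytes:
    # Locate the JPEG end-of-image marker (FF D9); everything after it
    # is the hidden blob, which we decrypt with the same XOR key.
    eoi = stego_bytes.find(b"\xff\xd9")
    enc = stego_bytes[eoi + 2:]
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(enc))
```

Because the carrier file remains a well-formed image, scanners that only validate file type or render the picture see nothing amiss.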
Trend Micro has seen LokiBot hiding inside image files before. In April, it reported a variant of the malware that hid a .zipx attachment inside a .png file.
Steganography has two benefits for malware authors, warned the researchers. First, it provides another layer of obfuscation, helping the malware to slip past some email security systems. Second, it provides the malware authors with more flexibility. This variant used the VBScript file interpreter to execute the malware rather than relying on the malware to execute itself. This means that the authors can change the script to alter the technique that LokiBot uses to install itself.
Steganography is becoming an increasingly common form of obfuscation for malware authors. Other notable uses of the technique include the Stegoloader backdoor Trojan, and the Vawtrak malware, which hid update files in favicons. The 2019 VeryMal campaign also used the technique to hide malware in advertising images.
You've heard about wardriving, but what about warshipping? Researchers at IBM X-Force Red have detailed a new tactic that they say can break into victims' Wi-Fi networks from afar.
The company calls the technique warshipping, and it is a more efficient evolution of wardriving, a popular technique among hackers seeking access to any wireless network they can find. Whereas wardrivers drive around a wide area with a directional antenna looking for wireless networks to crack, IBM's researchers took a more targeted approach.
Speaking at Black Hat USA, IBM researchers explained how they used off-the-shelf components costing under $100 to create a single-board computer with Wi-Fi and 3G capability. This enables it to connect to a Wi-Fi network to harvest data locally and then send it to a remote location using its cellular connection. The small device runs on a cell phone battery and easily fits into a small package.
Attackers can then send the device to a company via regular mail, where it will probably languish in a mail room for a while. During this time, it can connect to any Wi-Fi networks it finds in the building and harvest data – typically a hashed network access code. It sends this back to the attacker, who can then use their own resources (or a cloud-based cracking service) to extract the original access code. At this point, they have access to the company's Wi-Fi network.
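The offline-cracking step works because WPA2-PSK stretches the passphrase into a pairwise master key with PBKDF2 (4096 rounds of HMAC-SHA1, salted with the SSID), so candidates can be tested without ever touching the network again. A simplified sketch, assuming the attacker has a recovered PMK to compare against (in a real attack, candidates are verified against the captured handshake's MIC instead):

```python
import hashlib

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    # WPA2-PSK key derivation: PBKDF2-HMAC-SHA1, 4096 rounds, 32-byte key.
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

def dictionary_attack(captured_pmk: bytes, ssid: str, wordlist):
    # Recompute the derivation for each candidate until one matches.
    for candidate in wordlist:
        if wpa2_pmk(candidate, ssid) == captured_pmk:
            return candidate
    return None
```

The 4096-iteration stretch slows each guess, which is exactly why attackers offload the work to GPUs or the cloud-based cracking services the researchers mention.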
The warshipping device could access the Wi-Fi network and mount a man-in-the-middle attack, impersonating a legitimate Wi-Fi access point and coaxing company employees to access it. It would then be able to harvest their credentials and other secrets, IBM explained.
The device could be programmed to wake up periodically and use its 3G network to check a command and control server for instructions on whether to begin its attack or go back to sleep. This would help preserve its battery, IBM said.
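That wake/poll/sleep duty cycle is straightforward control logic. A hypothetical sketch of what IBM describes (the command names and polling interval are invented, and the cellular fetch is abstracted into a callback):

```python
import time

def warship_loop(fetch_command, sleep_fn=time.sleep,
                 poll_interval_s=3600, max_polls=24):
    """Periodically wake, poll the C2 channel, then go back to sleep."""
    for _ in range(max_polls):
        try:
            if fetch_command() == "attack":
                return "attacking"   # hand off to the Wi-Fi harvesting routine
        except OSError:
            pass                     # no cellular signal; stay dormant, retry later
        sleep_fn(poll_interval_s)    # sleep to conserve the cell-phone battery
    return "dormant"
```

Keeping the radio off except for brief check-ins is what lets a battery-powered package survive days in a mail room.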
The concept works in practice, warned the company, which said: "In this warshipping project, we were, unfortunately, able to establish a persistent network connection and gain full access to the target’s systems."
Chris Henderson, global head of IBM X-Force Red, has written up the attack at SecurityIntelligence.
Researchers at the Black Hat security conference this week revealed vulnerabilities in a leading children's tablet.
The flaws revolved around Pet Chat, an app that lets children talk to each other in a virtual room using pet avatars and predefined phrases. The app creates a peer-to-peer Wi-Fi connection (also known as Ad Hoc mode) that broadcasts the tablet's presence to similar devices using the SSID Pet Chat.
Checkmarx researchers used WiGLE, a wireless network mapping website, to track the location of LeapPads using Pet Chat. The vulnerability would allow anyone online to find the location of a LeapPad using Pet Chat by seeking them out on public Wi-Fi or tracking the device's MAC address.
Because Pet Chat didn't require authentication between devices, anyone near a LeapPad running the app could send an unsolicited message to the child with it, potentially using the preset phrases to lure the child into danger.
The LeapPad's outgoing traffic was also unencrypted, using HTTP rather than the TLS/SSL-encrypted HTTPS, the researchers warned.
They disclosed the Pet Chat vulnerability to LeapFrog in December 2018, although the company didn't remove it until June 2019.
This isn't the first time that children have been exposed by technology that purports to help them. In February, security consulting firm Pen Test Partners discovered that cybersecurity in children's smart watches had failed to improve following a report from the Norwegian Consumer Council in early 2018. The European Commission issued a recall order for one smartwatch, called Safe-KID-One, from German company ENOX, which sent information including location history and phone numbers in the clear. Malicious users could send commands to any watch making it call another number of their choosing.
LeapFrog didn't return our request for comment by press time.
In a panel at Black Hat USA, cryptographer Bruce Schneier; Camille Francois, research and analysis director at Graphika and fellow at Harvard Law School's Berkman Center; and Eva Galperin, director of cybersecurity at the EFF, talked about the benefits that technologists bring to society.
In a panel titled “Hacking for the Greater Good: Empowering Technologists to Strengthen Digital Society,” Francois said that the concept of the public interest technologist is not new “and not tied to the nature of Black Hat and DEFCON.” Meanwhile, Galperin described how the EFF expanded its use of technologists in the 1990s, as people “who explained things to lawyers or take on large challenges like securing endpoints,” though the role of the technologist requires a different set of skills and day-to-day work from what most companies were doing.
This is because the “notion of adversarial research is an act of public interest technology,” Schneier said, and that it is "not new to me, or new to the community.”
Schneier said that the practice of taking systems that are sold and relied on, and testing them without the permission of the company or government, should be welcomed, as "they are evaluated to determine whether they should be used."
“When we do this as academics or in a threat lab, we are engaging in the public interest,” Schneier said.
Francois raised the Edward Snowden disclosures, noting that there was a reliance on technologists to help journalists with the stories. “I was called by Glenn Greenwald to look at the documents, and journalists needed associate technologists to figure out what was going on,” Schneier said.
Francois said that there is a need to better prove the capabilities of technologists who serve the public interest. Schneier said: “We are seeing a lot more groups trying to bridge technology and policy and especially our area of tech security. Some is for fame and glory, some is for funding. Technologists want to do collaboration.”
Galperin said that the EFF’s niche of human rights in technology is “now touching everyone’s lives” and as technologists become more mainstream and important, “the opportunity for misunderstanding is higher.” She said that she is finding that battles that were thought to have been won, such as backdoors in end-to-end encryption, are being re-fought.
Black Hat founder Jeff Moss said that a lot of the talks over the past 20+ years at Black Hat had been about wanting the attention of management, political leaders and the board. Now that they are listening, he questioned what the industry is going to do with that attention.
“How we communicate really determines our outcomes, so for example now that the spotlight is on us, if we communicate well to the board you might get more budget, and if you communicate poorly to the board, you might get fired,” he said.
He asked how you communicate what “cyber” or “security” is, arguing that the language we use causes us to think of problems in a certain way and “leads in a direction we may not want to go in.”
Moss used the example of cyber being seen as the fifth domain by the military, but said that does not mean it is equal “and we are using language in a way that doesn’t fit.”
Moss said that although we are still in the early days of the internet, there are going to be several defining trends, including “centralized versus decentralized”; Moss said he believes in the latter, “but there are efficiency gains in centralized.”
Moss said that we’re in a “centralization phase,” which will enable law enforcement and regulation; if the trend continues, he speculated, none of us will be surprised to find ourselves more regulated.
“I’m a big believer that most of our problems are communications problems,” he said, noting that in DEF CON post-mortems, 80% of the problems are communications related and “totally fixable communications problems.”
Moss concluded by saying: “This gives me a lot of hope because we can fix communications problems. We are not inventing a new kind of maths, but what we have to do is reorder the way we think about things and reorder the way in which we communicate things and once we do that, you’ll see we will get completely different outcomes. Whether it is outcomes from our boss, or politicians or regulation. It is a bit of a soft skill that leads to better outcomes.”
A Pakistani man has been charged with multiple offenses after allegedly bribing AT&T staff to the tune of hundreds of thousands of dollars to help him fraudulently unlock two million customer mobile phones.
Muhammad Fahd, 34, was arrested in Hong Kong in February 2018 and extradited to the US last Friday. He’s charged with conspiracy to commit wire fraud and violate the Travel Act and the Computer Fraud and Abuse Act, four counts of wire fraud, two counts of accessing a protected computer in furtherance of fraud, two counts of intentional damage to a protected computer, and four counts of violating the Travel Act.
He is alleged to have bribed staff at the US telco giant over a five-year period ending in 2017, paying one individual as much as $428,500. Three employees have so far pleaded guilty to their involvement.
“Initially, Fahd allegedly would send the employees batches of international mobile equipment identity (IMEI) numbers for cell phones that were not eligible to be removed from AT&T’s network. The employees would then unlock the phones,” explained a DoJ news statement.
“After some of the co-conspirators were terminated by AT&T, the remaining co-conspirator employees aided Fahd in developing and installing additional tools that would allow Fahd to use the AT&T computers to unlock cell phones from a remote location.”
This effectively meant installing malware and unauthorized hardware on AT&T’s network so he could sell phone unlocking services to the general public, depriving the telco “of the stream of payments that were due under the service contracts and instalment plans,” according to the indictment.
Another co-conspirator, Ghulam Jiwani, was also arrested in Hong Kong but died before he could be extradited to the US. Fahd is facing a maximum of 20 years behind bars if found guilty.
“This defendant thought he could safely run his bribery and hacking scheme from overseas, making millions of dollars while he induced young workers to choose greed over ethical conduct,” said US attorney Brian Moran. “Now he will be held accountable for the fraud and the lives he has derailed.”