Half of global organizations still don’t have cyber insurance, despite the majority believing cyber-attacks will increase next year, according to FireEye.
The security vendor polled 800 CISOs and senior executives across the globe to compile its new Cyber Trendscape Report.
More than half (56%) said they believe the risk of attacks will grow next year and 51% said they aren’t ready for an attack. Yet half claimed not to have any cyber insurance, rising to 60% in Germany.
Fewer than one in 10 (8%) said they had no breach response plan in place, rising to 11% in the UK, 19% in Canada and 15% in Japan. In addition, 29% of those that did have response plans in place have not tested or updated them in the past 12 months or more.
This is one of the key requirements of the GDPR. Yet compliance fines appear not to be a concern for most organizations, despite the advent of the sweeping new EU legislation last year. Only a quarter (24%) of respondents said these were a concern, rising to 39% in the UK but dropping to 22% in Germany and 19% in France.
In fact, organizations are in many ways focused too much on compliance, according to Eric Ouellet, global security strategist at FireEye.
“One attitude that emerged which people should reconsider is letting compliance dictate security standards, when actually they should be aiming for a higher level of protection,” he said.
“For example, the report found that 29% of organizations had informal training programs on an ‘as needed’ basis that are focused on meeting core compliance requirements. It’s likely that the organizations which are taking a more comprehensive approach in this area and others are better equipped to deal with security threats.”
Another interesting finding from the report is the continued challenge of security awareness training. Around a fifth (21%) of German respondents lack any cybersecurity training program, much higher than the global average (11%).
The Mozilla Foundation and a group of rights groups and non-profits have penned an open letter to Facebook and Google urging them to halt political advertising until after the upcoming UK General Election.
The letter argued that there won’t be time in the current parliament for the urgent legislation on political ads that the UK Electoral Commission, Information Commissioner’s Office (ICO) and the cross-party DCMS Select Committee have called for.
“This legislative blackspot is particularly concerning in light of Facebook’s recent policies to allow politicians to openly publish disinformation through ads. Equally concerning is the lack of transparency as to what data is being used to target ads, and how such ads are being targeted,” the letter continued.
“We are aware that these policies are subject to debate both inside and outside the company. While that debate continues, people in the UK are left in uncertainty about whether they can trust what they see on the platform.”
The letter’s authors pointed to precedent in this space, with Google blocking political ads two weeks before polling in the Irish referendum and during the entirety of the recent Israeli and Canadian election periods.
“Again, this call is not about a permanent ban on political and issue-based ads; indeed, political ads are not inherently problematic. But the online advertising model, which depends on vast collection of data and opaque ad targeting systems is not fit for purpose and thus fundamentally undermines trust in political advertising,” it concluded.
“It is a request to take temporary measures to ensure that your platforms are not complicit in exploiting electoral laws MPs themselves have described as ‘unfit for purpose’.”
Mark Zuckerberg has come in for heavy criticism of late for effectively defending the right of politicians to lie in their ads, saying: “I don't think most people want to live in a world where you can only post things that tech companies judge to be 100% true.”
Facebook rejected a request from presidential hopeful Joe Biden to remove a Trump campaign ad containing misinformation about the former vice president.
Last month, Twitter stepped up the pressure on Facebook by announcing a ban on political advertising on its platform. However, experts argued that Twitter doesn’t host many political ads anyway, and the move would do nothing to stem the flow of misinformation ahead of elections coming from bot accounts.
An Indian ed tech provider suffered a serious data breach months ago impacting hundreds of thousands of customers, but is only now informing them of the incident.
Vedantu offers a real-time online learning environment for teachers and students from its headquarters in Bengaluru.
However, it was hit by an attack back in July that exposed the personal data of 687,000 users, according to breach notification site Have I Been Pwned.
“The JSON formatted database dump exposed extensive personal information including email and IP address, names, phone numbers, genders and passwords stored as bcrypt hashes,” the note explained. “When contacted about the incident, Vedantu advised that they were aware of the breach and were in the process of informing their customers.”
Reports suggest that the culprit may have been an exposed MongoDB instance, although this has yet to be confirmed.
Although the passwords were hashed with bcrypt rather than stored in plaintext, there’s plenty of other personal information in the breach that could give the hackers an opportunity to craft convincing follow-on phishing attacks and identity theft attempts.
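The reason bcrypt-hashed passwords offer some protection is that each stored value is salted and deliberately slow to compute, so attackers must brute-force guesses per account rather than simply reversing the stored value. The sketch below illustrates the same salted, slow-hash pattern using Python's standard-library PBKDF2 as an analogue (bcrypt itself requires a third-party package; the iteration count and field names here are illustrative assumptions, not Vedantu's implementation):

```python
import hashlib
import hmac
import os

def hash_password(password, iterations=100_000):
    """Produce a salted, deliberately slow password hash (PBKDF2-HMAC-SHA256)."""
    salt = os.urandom(16)  # unique per user, stored alongside the digest
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=100_000):
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

Because the hash cannot be reversed, the more immediate risk from a dump like this lies in the plaintext fields (names, emails, phone numbers) that feed phishing and scam attempts.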
Ray Walsh, digital privacy advocate at ProPrivacy, said it’s a concern the breach wasn’t discovered earlier by Vedantu.
“What’s more, because phone numbers were stolen along with names and addresses, it is possible that users could have fallen victim to phone scams designed to steal their money — or perhaps even a SIM swap attack that could have resulted in the dual-factor authentication for their online accounts, or perhaps even their internet banking, being compromised,” he added.
“Any user who believes they have been affected by this data breach is advised to keep a close eye on any emails, messages, or phone calls they receive that could be using data stolen from Vedantu to coerce them into parting with further data or clicking on malicious links.”
A Pentagon advisory board has published a set of guidelines on the ethical use of artificial intelligence (AI) during warfare.
In "AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense," the Defense Innovation Board (DIB) shied away from actionable proposals in favor of high-level ethical goals.
In its recommendations, the board wrote that the Department of Defense's AI systems should be responsible, equitable, traceable, reliable, and governable.
Since AI systems are tools with no legal or moral agency, the board wrote that human beings must remain responsible for their development, deployment, use, and outcomes.
As far as being equitable, the board wrote that the Department of Defense (DoD) "should take deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems that would inadvertently cause harm to persons."
To ensure AI-enabled systems are traceable, the board recommended the use of transparent and auditable methodologies, data sources, and design procedures and documentation.
The board recommended that the DoD's AI should be as reliable as possible, and because reliability can never be guaranteed, that it should always be governable. That way, systems "that demonstrate unintended escalatory or other behavior" can be switched off.
The board called for ethics to be an integral part of the development process for all new AI technology, rather than an afterthought.
"Ethics cannot be 'bolted on' after a widget is built or considered only once a deployed process unfolds, and policy cannot wait for scientists and engineers to figure out particular technology problems. Rather, there must be an integrated, iterative development of technology with ethics, law and policy considerations happening alongside technological development," wrote the board.
Although public-sector bodies including the European Commission, the UK House of Lords, and ministries or groups from the governments of Germany, France, Australia, Canada, Singapore, and Dubai have all formulated AI ethics or governance documents, the US is unique in offering AI guidelines specific to the military.
"What is noteworthy when canvassing the plethora of available AI Ethics Principles documents is that there is no other military in the world that has offered its approach to ethical design, development, and deployment of AI systems. In this respect, DoD is leading in this space, showing its commitments to ethics and law," wrote the board.
Since DIB's recommendations are not legally binding, it is now up to the Pentagon to decide if the board's guidelines should be followed.
America's Midwest is to get its first National Guard cyber battalion.
The 127th Cyber Battalion will comprise 100 soldiers, who will be based in Indiana. Before taking up their new posts, the soldiers will head to the Muscatatuck Urban Training Center in Jennings County, where they will receive state-of-the-art training in cybersecurity and cyber-warfare.
Located 75 miles southeast of Indianapolis, the center features live environments for cyber- and electronic warfare testing and training. The soldiers will be challenged to neutralize attacks in realistic simulations of incidents that have occurred in the past and attacks that could be launched in the future.
Additional training will be provided to the soldiers by Ivy Tech Community College Cyber Academy at Muscatatuck.
"With our National Guard's current cyber resources and Indiana's top-notch academic institutions, our state is a natural fit for one of the country's first cyber battalions," Indiana governor Eric Holcomb said in a statement.
"Warfare is becoming increasingly digital, and it's an honor for Indiana to be home to those who protect our country from computer-generated threats."
Indiana beat 19 other states and territories to become the battalion's new home. Officials chose the Hoosier State for its existing cyber capabilities, partnerships with industry and academia, and its proven ability to recruit and retain soldiers.
The 127th Cyber Battalion is the Army National Guard's fifth cyber battalion. Two battalions are already up and running in Virginia, and South Carolina and Massachusetts each have one.
Indiana's new battalion is expected to attain its full operational capability by 2022. The 127th will serve under the Army National Guard's 91st Cyber Brigade, which was established in 2016 in Virginia.
Most of Indiana's new battalion of cyber-soldiers will serve part-time on top of pursuing civilian careers. Once qualified, they will offer cybersecurity expertise to companies, along with training readiness oversight for cyberspace operations, network vulnerability assessments, security cooperation partnerships, FEMA support, and cyberspace support for federal requirements.
“The Army National Guard’s role in national cybersecurity provides a larger blanket of protection against our adversaries,” said Lt. Gen. Daniel R. Hokanson of the Army National Guard.
A malicious Android app that displays advertisements and facilitates the download of additional malicious apps has infected over 45,000 devices in six months.
Researchers at Symantec observed a surge in detections of the Xhelper app, which has mainly been targeting users in the US, India, and Russia.
This annoying app, which bombards infected devices with pop-up advertisements, is tricky to find because it has been designed to not appear on the system's launcher.
In addition to playing an irritating game of hide and seek, Xhelper has proved to be more tenacious than a 5-year-old in a candy store by repeatedly reinstalling itself on devices from which it's been removed and even on devices that have been restored to their factory settings.
Researchers wrote: "We have seen many users posting about Xhelper on online forums, complaining about random pop-up advertisements and how the malware keeps showing up even after they have manually uninstalled it."
With no app icon visible on the launcher, Xhelper can’t be launched manually. Instead, the malicious app gets its green light from external events, leaping into action when a compromised device is rebooted, an app is installed on or removed from the device, or the device is connected to or disconnected from a power supply.
The launched malware has cunningly been designed to register itself on the device as a foreground service, lowering its risk of being quashed when the device's memory is low.
"For persistence, the malware restarts its service if it is stopped; a common tactic used by mobile malware," wrote researchers.
Once Xhelper has settled into the device's lounge and popped its feet up on the coffee table, it begins decrypting to memory the malicious payload embedded in its package. The payload then connects to the threat actor's command and control (C&C) server and waits for commands.
"Upon successful connection to the C&C server, additional payloads such as droppers, clickers, and rootkits, may be downloaded to the compromised device. We believe the pool of malware stored on the C&C server to be vast and varied in functionality, giving the attacker multiple options, including data theft or even complete takeover of the device," wrote researchers.
Symantec first spotted Xhelper back in March 2019, when it was being used to visit advertisement pages for monetization purposes. Since then, the malicious app's code has become more sophisticated, and researchers "strongly believe that the malware’s source code is still a work in progress."
Proofpoint is extending its data loss prevention (DLP) capabilities with the acquisition of insider threat management provider ObserveIT. The company said that the combination of ObserveIT’s lightweight endpoint agent technology and data risk analytics with Proofpoint’s information classification, threat detection and intelligence will offer “unprecedented insights into user activity with their sensitive data.” The transaction is expected to close in the fourth quarter of 2019.
ObserveIT’s insider threat management solution enables security teams to detect, investigate, and prevent potential insider threat incidents by delivering real-time alerts and actionable insights into user activity in one solution. Once integrated with Proofpoint’s information protection suite, it will deliver real-time detection of anomalous interactions across people, data, devices, and applications, allowing security teams to understand and respond to data being mishandled, whether on a corporate device, in a cloud app like Office 365, or via email.
“Today’s ObserveIT acquisition underscores Proofpoint’s commitment to providing organizations with people-centric cybersecurity and compliance solutions that protect what matters: their people and the data they have access to, in a post-perimeter, cloud-first world,” said Gary Steele, chairman of the board and chief executive officer of Proofpoint.
“Defending data requires the ability to detect risky insider threat behavior and risky user activity, and swiftly mitigate risk across cloud apps, email, and endpoints. We are the only security company that provides organizations with deep visibility into their most attacked people—and with ObserveIT, we will bring to market the first truly innovative enterprise DLP offering in years. We are thrilled to welcome ObserveIT’s employees and customers to Proofpoint.”
Mike McKee, CEO of ObserveIT, said that Proofpoint’s leadership in people-centric cybersecurity, broader intelligence and R&D resources “are significant market differentiators and directly complement our ability to quickly detect insider threats and prevent critical information loss.”
McKee added: “We are very excited to join the Proofpoint team and provide customers with even more powerful solutions to mitigate insider threats, decrease incident investigation time, and make sure users don’t intentionally or accidentally send valuable, confidential information externally.”
The US government will soon partially relax its block on Huawei by allowing domestic tech firms to sell it components, according to the Commerce Department.
Although Donald Trump in June signaled a softening of Washington’s hardline approach to the Chinese giant, when he said he’d allow some US firms to start supplying the company again, the all-important licenses have still not appeared.
Commerce secretary Wilbur Ross said on Sunday that these “will be forthcoming very shortly,” according to Bloomberg.
This will help US firms which have seen rival companies in Asia pick up lucrative contracts to sell Huawei various components, after Trump approved a decision to put the Shenzhen firm and 70 affiliates on an “entity list.”
It’s telling that the Commerce Department has already received 260 requests from US firms for licenses to sell to Huawei despite the blacklisting.
“That’s a lot of applications. It’s frankly more than we would’ve thought,” Ross reportedly said. “Remember too with entity lists there’s a presumption of denial. So the safe thing for these companies would be to assume denial, even though we will obviously approve quite a few of them.”
Huawei has subsequently been joined on the entity list by over 20 other Chinese firms, including AMD joint venture partner Tianjin Haiguang Advanced Technology Investment Company, surveillance tech giants Hikvision and Dahua Technology, and supercomputer builders Sugon and the Wuxi Jiangnan Institute of Computing Technology.
US firms are also fearful of a reprisal from China, which could put them on a tit-for-tat blacklist, making it difficult to sell their wares in the giant eastern market.
For its part, Huawei has been bullish about its growth prospects, despite the intense pressure from Washington, which has also barred it from competing in the US telecoms market.
It denies all claims of being a US national security risk and still hopes to be the world’s leading smartphone maker by volume by 2020.
Media giant Nikkei has become the latest firm to suffer a humiliating business email compromise (BEC) attack, after it admitted losing $29m to scammers following human error.
The Tokyo-headquartered firm, which owns the Financial Times, revealed in a brief statement that an employee of its US subsidiary made the crucial mistake.
“In late September 2019, an employee of Nikkei America, Inc. … transferred approximately $29m Nikkei America funds based on fraudulent instructions by a malicious third party who purported to be a management executive of Nikkei,” it noted.
“Shortly after, Nikkei America recognized that it was likely that it had been subject to a fraud, and Nikkei America immediately retained lawyers to confirm the underlying facts while filing a damage report with the investigation authorities in the US and Hong Kong. Currently, we are taking immediate measures to preserve and recover the funds that have been transferred, and taking measures to fully cooperate with the investigations.”
Nikkei follows a long line of big-name organizations which have been caught out over recent months and years.
Most notably, tech giants Facebook and Google were both tricked into making huge money transfers, of $99m and $23m respectively — although those attacks appear to have been more sophisticated than the one affecting Nikkei.
BEC scammers are also looking to take a leaf out of the ransomware playbook by targeting US municipalities.
The City of Ocala in Florida is said to have lost $742,000 after an official was tricked by a spear-phishing email. The message was sent by an attacker posing as an employee of a building firm the authority is currently using to construct an airport terminal.
When the real construction company complained that an invoice had not been paid, the alarm was raised, according to local reports.
BEC cost global organizations $1.3bn last year, almost half of total losses reported to the FBI.
A global internet registrar with millions of customers has admitted suffering a data breach in August which exposed user account information.
US-based Web.com, and subsidiaries Network Solutions and Register.com, discovered on October 16 that they were hit by an attack late in August.
“Our investigation indicates that account information for current and former Web.com customers may have been accessed,” the firm said in a statement.
“This information includes contact details such as name, address, phone numbers, email address and information about the services that we offer to a given account holder. We encrypt credit card numbers and no credit card data was compromised as a result of this incident.”
The firm said it brought an independent cybersecurity firm on board “immediately” after discovering the unauthorized access, in order to determine the scope of the incident and what data was affected.
“We are notifying affected customers through email and via our website, and as an additional precaution are requiring all users to reset their account passwords,” it added.
Although credit card numbers are encrypted in line with PCI DSS requirements, Web.com urged customers to keep an eye on card activity.
However, the other stolen information could put customers at risk of follow-on phishing and identity fraud attempts.
Network Solutions is the fifth largest registrar in the world, with almost seven million accounts to its name, although it’s unclear how many were affected by this incident.
Matthew Ulery, chief product officer at SecureAuth, argued that the attack highlights the need for more streamlined, intelligent authentication security to protect employee accounts.
“Attackers are simply walking through the front door of enterprises, gaining unauthorized access and looting PII, further exacerbating the identity security crisis. This attack is a major wake-up call for organizations to improve their identity security approach,” he added.
Working environments designed to empower only men are putting women off pursuing cybersecurity careers.
Cybersecurity professionals speaking at the (ISC)² Security Congress held in Florida this week revealed that talented women are taking their skills elsewhere because the industry made them feel unwelcome.
Deidre Diamond, founder and CEO of recruitment company CyberSN, said: "We’ve heard for years now that women feel they have to work twice as hard and prove themselves to be technical on a regular basis.
"I’m seeing women leaving the more technical roles and going into the risk and law and privacy roles, where there’s more women already and they don’t have to continually prove themselves."
Diamond, who founded BrainBabe to promote diversity in cybersecurity, said women are being passed over for promotion, talked over in meetings, and asked about childcare arrangements in interviews. Another big problem is men taking credit for work that was actually completed by women.
"Who doesn’t want to work somewhere where it’s comfortable and they feel respected? If that’s not happening, then that’s just tragic, particularly in this day and age, in these jobs, in the United States of America. We don’t have inclusive cultures at the mass level," said Diamond.
Crystal Williams, information security certification and accreditation manager at Women's Society of Cyberjutsu, said: "A lot of the women I have worked with have left the cybersecurity field because of the male domination, but their self-esteem and their confidence level in themselves was very low. And there are men out there that see that they have that insecurity who will play on it.
"A lot of those women leave the field and never come back, and they could have been phenomenal programmers or analysts."
According to Sarah Lee, founder of the K-12 computer science and cybersecurity outreach program Bulldog Bytes, the key to making women confident in their technical abilities is to teach them essential skills at a young age in a single-sex environment.
Lee said: "We mostly provide gender-specific workshops as there's a lot of literature that says we need to keep the girls in a separate group. When we are engaging them with technology, the boys tend to take over and tell them what to do."
Agreeing with this approach, Williams said: "One of the reasons for separating the girls out is they need to build their confidence level and their self-esteem before we introduce them into an environment where they already feel like they don't belong.
"Once they develop that confidence level then you don't have to worry about them being able to function in a workforce that is male-dominated because with their skillset they know 'I can do this.'"
Achieving gender equality in cybersecurity requires work on both sides, according to Diamond.
"It’s not that there’s something wrong with men or with women, it’s about showing the problem. If 90% of the marketplace are men, then men are going to be the ones to make the change. They have to be the ones to empower women. There are plenty of them who care and yet never saw the problem, but awareness is better now thanks to #MeToo."
Describing what women can do to improve the situation for themselves, Diamond said: "Women need to support each other more and play the game of capitalism. They need to show up prepared to be taken seriously and understand what that means in whatever environment they are in. There are a lot of things that we can do with our language and our approach that will help us."
A structured transdisciplinary approach could be the key to successfully engaging children in cybersecurity.
Participants in a STEAM education panel held at the ninth annual (ISC)² Security Congress called for cybersecurity to be taught as an integral part of all disciplines rather than as its own separate subject.
"People see cybersecurity as a separate content area. I think that it needs to be part of every content area, like reading, writing, and arithmetic," said Sarah Lee, a panelist and assistant department head and director of undergraduate studies at Mississippi State University.
"We are training teachers to integrate computational thinking and cybersecurity awareness and concepts into their classrooms, and that's working very well."
Panelists shared their experiences of integrating the arts into technology and cybersecurity workshops in a way that made the lessons culturally relevant for students. Successful methods included capturing students' imaginations through dance, likening dance to an algorithm, and then tasking students with programming robots to dance.
Panelist Crystal Williams described how the already challenging task of engaging youngsters in STEAM workshops was made harder by the disparity in their educational backgrounds.
"In my workshops I have children that are home-schooled and children from the public-school system," said the information security certification and accreditation manager at Women's Society of Cyberjutsu.
"The girls who came from the public-school system had a very hard time with measurements. It took time to get them engaged because I had to go back and teach them the fundamental skills of fractions.
"The children in the public system had no type of arts education and struggled with creative thinking, whereas the home-schooled children, and kids from special schools, have got that foundational skill."
Dr. Anna Wan, founder and director of the Eagle Maker Hub, who also runs the Hackability summer camp for high school teenagers with physical disabilities, said she had found a "multiple points of entry" approach effective when teaching math.
The tactic of seamlessly weaving the core skills involved in one subject into lessons on other topics could work for cybersecurity and technology too, if deployed intelligently.
"Looking at Arts and STEM, you can't have STEAM without the 'A', but don't do it haphazardly," said Wan. "If you are just saying there's some paint involved so it's a STEAM activity and you're not looking deeper into the art concepts then you're not really bringing in the 'A'."
The recruiting methods being used in the cybersecurity industry are so dire that they pose a national security threat.
In an exclusive interview with Infosecurity Magazine at the (ISC)² Security Congress in Orlando, Florida, the founder and CEO of cybersecurity research and staffing firm CyberSN and of BrainBabe, Deidre Diamond, described recruitment in cybersecurity as "a crisis in a crisis in a crisis."
Diamond said: "The way we look for jobs is broken. Our professionals aren’t happy. They don’t love their jobs, but because job searching is so bad, they settle and stick around longer in those jobs.
"Having unhappy employees is an insider threat. I believe it’s a national security issue."
According to Diamond, the difficulties stem from a general ignorance of the scope and variety of jobs available in cybersecurity, coupled with the absence of a shared terminology to describe the many skill sets at play within the industry. Also, chronic under-investment by businesses in their cybersecurity means many cybersecurity professionals are doing three jobs in one.
"Cybersecurity isn’t one job; it’s 35 different job categories and 111 titles," said Diamond.
"On top of the changing and growing roles in cybersecurity, we don’t have the common language it takes to figure out the job, or to figure out what the professional really knows. We don’t know how to sell cybersecurity correctly to anybody, never mind to diverse candidates."
To mitigate the problem, Diamond invested heavily in building question-and-answer technology that allows recruitment to be carried out in a different, skills-based way.
Describing the recruitment method practiced by her company, Diamond said: "Resumes don’t matter, and job descriptions don’t matter. We start from scratch, and we ask our own questions, then we build somebody’s profile and we build a job description."
Candidates give a baseline job title they have held and are then asked to list the tasks and projects they have been working on, detailing their functional roles and what percentage of their time is taken up by each role. They are then matched to jobs based on what functional roles are required, taking into account other factors such as salary, location, and remote working options.
Breaking down a job to show the percentage of time spent on each function can widen the number of opportunities open to candidates who might be put off by a job spec that relies on words alone.
Diamond said: "Somebody doing 50% of their time being an analyst and 50% of their time doing incident response could still be interested in an 80:20 split, or in a 40:40 plus something new like malware. That could still be a fit, because malware is really only 20% of the job, and humans are smart and they can learn. When the job is presented correctly, we can make those matches."
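The percentage-based matching Diamond describes can be pictured as a simple overlap calculation between a candidate's time split and a job's required split. The sketch below is purely illustrative (the role names, scoring rule, and the idea of a single overlap score are assumptions for this example, not CyberSN's actual system):

```python
def match_score(candidate, job):
    """Percentage overlap between a candidate's functional-role time split
    and a job's required split.

    Both arguments map functional role -> percent of time (summing to 100).
    The score is the total overlapping percentage, from 0 to 100.
    """
    roles = set(candidate) | set(job)
    return float(sum(min(candidate.get(r, 0), job.get(r, 0)) for r in roles))

# Candidate splitting time 50/50 between analysis and incident response,
# matched against a 40/40/20 job where malware work is new to them.
candidate = {"analyst": 50, "incident_response": 50}
job = {"analyst": 40, "incident_response": 40, "malware": 20}
print(match_score(candidate, job))  # → 80.0
```

An 80% overlap captures the article's point: the candidate already covers most of the role, and the remaining 20% (malware) is something a capable hire can learn, so presenting the job as a percentage breakdown surfaces matches a keyword-only job spec would miss.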
Two North American men have pleaded guilty to hacking and extorting Uber and LinkedIn’s Lynda.com business, compromising data on tens of millions of users in the process.
Brandon Charles Glover, 26, of Winter Springs, Florida, and Vasile Mereacre, 23, of Toronto, Canada, pleaded guilty to one charge each of conspiracy to commit extortion involving computers. Each faces up to five years in prison and a $250,000 fine as a result.
The two are said to have used a custom-built GitHub account checker tool to try a number of already breached corporate credentials and see if they unlocked accounts on the developer site. After accessing several accounts belonging to Uber employees, they found AWS credentials which unlocked the online taxi firm’s AWS S3 data stores.
Using an encrypted ProtonMail address, they then contacted Uber’s CSO, claiming to have found a vulnerability in its systems and demanding payment in return for deletion of the compromised customer and driver data, which ran to 57 million records.
Uber eventually agreed, paying them the requested $100,000 in Bitcoin through its HackerOne account and then covering up the incident, until a new CEO decided to come clean in 2017.
Emboldened by their success, Glover and Mereacre then obtained access to 90,000 Lynda.com accounts via the online education firm’s AWS S3 account, and tried the same extortion trick, according to court documents.
However, this time the firm went public with the breach.
The two incidents almost read like a case study in the right and wrong ways to handle a breach-related extortion demand.
In the case of Uber, it ended up settling with the US government to the tune of $148m, whilst paying a £385,000 fine to the UK’s Information Commissioner’s Office (ICO). It’s lucky to have escaped the wrath of GDPR regulators, given that 2.7 million British customers and drivers were affected by the breach.
Twitter has announced a ban on political advertising ahead of crucial elections in the UK and US over the coming year, turning up the heat on Facebook to tackle micro-targeting campaigns on social media.
At Infosecurity Europe earlier this year, author Jamie Bartlett warned that elections will increasingly be fought online, with small groups of swing voters micro-targeted by personalized ads. This strategy threatens to undermine the legitimacy of results, he argued, and could be further tainted by dubious use of private data, as per the Cambridge Analytica scandal.
Across several posts on the social platform he co-founded, CEO Jack Dorsey explained that the firm’s final policy would be published on November 15 and enforced a week later.
“We’ve made the decision to stop all political advertising on Twitter globally. We believe political message reach should be earned, not bought. A political message earns reach when people decide to follow an account or retweet. Paying for reach removes that decision, forcing highly optimized and targeted political messages on people. We believe this decision should not be compromised by money,” he said.
“While internet advertising is incredibly powerful and very effective for commercial advertisers, that power brings significant risks to politics, where it can be used to influence votes to affect the lives of millions.”
Although tacitly admitting that the decision would probably have a minimal impact on the firm, given its relatively minor role in a much larger political advertising ecosystem, Dorsey couldn’t resist piling the pressure on Facebook.
“For instance, it‘s not credible for us to say: ‘We’re working hard to stop people from gaming our systems to spread misleading info, buuut if someone pays us to target and force people to see their political ad…well...they can say whatever they want!’,” he argued.
Dorsey also called for “more forward-looking” political advertising regulation, though he admitted this would be difficult to craft.
The news was welcomed by non-profit the Open Knowledge Foundation, which called on Facebook to follow suit.
“It will go a considerable way to preventing the spread of disinformation and fake news, and help to resuscitate the three foundations of tolerance, facts and ideas,” argued CEO, Catherine Stihler.
“It is imperative that we do not allow disinformation to blight this year’s UK General Election, forthcoming elections across Europe, and next year’s US Presidential election. Facebook must act on the growing demands for greater transparency.”
Socialbakers CEO Yuval Ben-Itzhak also praised the move as part of Twitter’s efforts to clean up its platform.
“By banning political advertising on the platform, Twitter's leadership is taking an important stance,” he added.
“Validating each ad at scale is technically challenging to say the least, so by banning politically-motivated ads the platform stands a better chance of remaining digital pollution-free for its advertisers and users.”
However, Tom Gaffney, security consultant at F-Secure, argued that the real problem for Twitter is fake accounts, which are used to amplify often extreme views and misinformation, and trolling, which can also be used to spread rumors.
“Since many fake and troll accounts are controlled at least partially by real people, it is very difficult to create algorithmic methods to detect them,” he concluded.
“Despite Twitter’s own efforts, it is clear that the platform is still burdened by the presence of fake accounts and that many manipulation tactics are still very viable. In order to build better detection methods, more research is needed to understand how the people behind these accounts operate.”
The next generation of cybersecurity specialists must look at ways to ensure better security across our entire lifetimes.
Speaking at Bsides Belfast, Duo Security advisory CISO Wendy Nather explored the question of “how do we live securely from cradle to grave.” In her closing keynote, Nather recalled the efforts involved in educating her family on internet use, and in gaining power of attorney over her parents’ estate.
“We are conditioned by the interface and this can be exploited and leveraged against us,” she said, explaining that some of us have only used computers at work, and now we are “exposed from birth” and are given accounts from school to college to work. “We get more logins and government accounts and bank accounts, and online shopping,” she said, adding that “the stupidest thing we did as technologists” was to determine that credentials can be stored in the brain.
“As you get old you get incapacitated, and people may be disabled and may need assistance – how can you let someone run your life for you?” Nather asked, arguing that this is something we have to think about now, and that this is something we need that “goes across all accounts from birth to death.”
She called on delegates to consider this, and to create an “intermediary to cover the digital lifespan.” She praised the uptake of password managers and WebAuthn, “and the emerging root of trust that is the phone,” but noted that these are expensive and fragile, “and not what we need to cover our entire lifespan.”
Such a system has “got to be more than authentication, and help with security decisions like delegating tasks”; it needs to be granted and revocable, work with everything you have, and be age appropriate so that it can start at school.
“More than identity, we need something that encompasses regulations across the globe and is regulated by a trusted entity with no other agenda than providing this service – it cannot sell data or promote anything else – and it has got to work at speed.”
Nather concluded by urging the audience to do this, adding that this is “the greatest challenge of our generation.”
The UK’s privacy watchdog has raised “serious concerns” about police use of facial recognition technology, and called for the introduction of a statutory code of practice to govern when and how it should be deployed.
There have been numerous complaints from lawmakers, rights groups and members of the public in the past about how police are using the technology in public spaces, with many arguing that trials are being run covertly and that those members of the public covering their faces are assumed to be hiding something.
Big Brother Watch released a report last year claiming live facial recognition (LFR) systems being used by the Met police are 98-100% inaccurate.
Information commissioner Elizabeth Denham argued in a blog post yesterday that the ICO’s investigation into LFR use by the Met and South Wales Police had raised “serious concerns about the use of a technology that relies on huge amounts of sensitive personal information.”
“We found that the current combination of laws, codes and practices relating to LFR will not drive the ethical and legal approach that’s needed to truly manage the risk that this technology presents,” she added.
Denham argued that a recent court ruling, in which a judge found South Wales Police’s use of LFR to be lawful, should not be seen as a blanket authorization.
Instead, police forces across the country must follow her first Commissioner’s Opinion, announced yesterday.
This stipulates that police must follow current data protection laws — de facto the GDPR — during trials and full deployment, and that the use of facial images constitutes “sensitive processing” under this legislation. This applies whether an image produces a match on a watchlist or if it is subsequently deleted.
Data controllers must identify a lawful basis for the use of LFR and data protection laws apply to the whole process — “from consideration about the necessity and proportionality for deployment, the compilation of watchlists, the processing of the biometric data through to the retention and deletion of that data.”
In short, the ICO is telling UK police to slow down in their use of LFR, and ensure it is justified and done lawfully.
The watchdog said it would be working with the relevant authorities to produce a statutory and binding code of practice issued by the government on LFR use in public places.
In the US, there has been something of a backlash against LFR of late, with local authorities implementing bans on its use.
Supply chain attacks continue to be a reality for businesses, and are often an easier route in for adversaries than attacking targets directly.
Speaking at Bsides Belfast 2019, Cisco Talos security researchers Edmund Brumaghin and Nick Biasini explained that supply chains begin with a raw material that goes to a supplier, a manufacturer and a distributor, and with so many parties involved in the process, it is easy for an attacker to step in.
They highlighted cases from the past, including the Gunman project, which uncovered the first keylogger, planted by the Russians in typewriters at US embassies, in “the first known interdiction attack.”
“Hardware attacks today don’t exist, and there are reasons for it,” Biasini said. He explained that circuit boards carry so many chips and traces, and so many hundreds of thousands of people are employed to strip chips and layers, that it would be “extremely difficult and noisy” to compromise a single device; an attacker would instead need to interfere with every device on an assembly line.
Looking at software supply chain attacks, Brumaghin said that this is a much softer target, pointing to the NotPetya attack, which compromised the Ukrainian M.E.Doc software, as well as the CCleaner compromise, in which a malicious version of the software was made available for download.
There are also more recent cases, such as altered code in Webmin and PHP’s PEAR repository, while Biasini said that “a gigantic target” exists in browser extensions, as an attacker “can hit a huge amount of systems and do click fraud with little difficulty.”
Biasini also said that open source has become a massive target, as adversaries realize that rather than compromising systems one by one, they can focus on the places where code is written and shared. He also called advertising networks “a disaster as so many systems, domains and processes can be infected along the way.”
In terms of defense, they recommended “covering all of the bases,” including:
- Asset identification
- User access control
- File access control
- User education
- Threat hunting
The speakers also advised organizations to document and validate all network connections, document data sent from the client, “scrutinize incoming network connections” and push security requirements to vendors, “as controls don’t just apply to your environment anymore.”
Biasini concluded that the supply chain is a natural way in for an adversary: “if they cannot get in via the front door, they will come in via the supply chain.”
Retired US Navy four-star admiral William McRaven offered guidance on how to succeed in life as he delivered the closing keynote address at the (ISC)² Security Congress in Orlando, Florida.
Drawing from memories of his exceptional 37-year military career, McRaven encouraged the rapt crowd to embrace teamwork, take risks, and be prepared to fail if they want to reach their goals.
McRaven played a key role in thousands of dangerous overseas missions, overseeing the capture of Saddam Hussein and the raid that resulted in the death of Osama bin Laden.
Speaking at the security conference on Wednesday, McRaven shared a number of lessons drilled into Navy SEALs as they go through their almost inconceivably tough initial training, such as "Life's not fair; get over it."
After advising attendees to start every day by making their bed, McRaven said: "Making the bed is recognizing that the little things in life matter. If you can't even make your bed, how are you ever going to lead a complex mission?"
McRaven told the crowd how he broke his back and pelvis in a parachute accident that occurred during a 1,000-foot freefall exercise in the summer of 2001. During the long months of recuperation that followed, McRaven was kept in good spirits through his family's loving care and the frequent visits he received from friends and colleagues.
"Make as many friends as you can, have as many colleagues as you can, and take care of as many strangers as you can, as someday they may come back and take care of you," McRaven advised the audience.
McRaven implored the crowd to never miss the opportunity to inspire someone, because it can have a cascading effect. He shared a particularly uplifting story from his own life, which occurred when he met a young man in the 25th Infantry Unit.
The soldier had recently returned from Iraq after an Explosively Formed Projectile (EFP) entered the vehicle in which he was traveling. The vehicle's other occupants all lost their lives that day. The young soldier lost all four of his limbs.
As McRaven tried to think of something to say to the young amputee, he found himself battling feelings of pity and remorse. To his amazement, the young man said to him: "Sir, I'm 24 years old. I'm going to be just fine."
"I never forgot that," said McRaven. "That young man, that day, inspired me a way that few people have."
Cybersecurity's leading lights were recognized at an award ceremony held yesterday in Orlando, Florida.
The special event, which took place at the Walt Disney World Swan and Dolphin Resort on day three of the (ISC)² Security Congress, was staged to honor the winners of the 2019 Information Security Leadership Awards (ISLA®) Americas.
The ISLA Americas awards recognize outstanding leadership and achievement in workforce improvement among information security and management professionals throughout the private and public sectors in North, Central, and South America.
To be in the running for the award, cybersecurity professionals must have inspired change within the cybersecurity field. Only individuals working in the private and public sectors throughout the Americas, but outside of the U.S. federal government, are eligible for the prestigious accolade.
"Each year, the ISLA Americas ceremony showcases what leaders in our field are doing to help us achieve our vision of inspiring a safe and secure cyber world," said (ISC)² chief operating officer Wesley Simpson.
"The winners are enabling positive change within their organizations and communities, and across the industry."
Tomiko K. Evans picked up the award for Up-and-Coming Information Security Professional for introducing CyberRap to cybersecurity conferences. Evans is CEO and owner of unmanned aerial vehicle (UAV) cybersecurity firm Aerial Footprint, and vice president of information security at Palo Alto Networks.
CISSP Andrés Velázquez took home an ISLA Americas award for Community Awareness for his Crimen podcast. Velázquez is the founder and president of MaTTica, the first forensic lab in the private sector in Latin America.
Another CISSP, Anna Harrison, scooped up the gong for Information Security Practitioner for her efforts to strengthen the nation’s cybersecurity. Harrison, who holds a master's degree in computer science from Mississippi State University, is senior cybersecurity engineer at veteran-owned Alabama business H2L Solutions.
The winner of the Senior Information Security Professional award was Cassio Goldschmidt, CSSLP, CAP, and head of information security at HVAC software creators ServiceTitan. Goldschmidt earned the accolade for his work in end-to-end security policy enactment and awareness.
This year, a judging committee comprising five seasoned industry professionals representing both North America and Latin America reviewed the nominations and selected the winners based upon specific criteria and eligibility requirements.