UK IoT Rules Come Into Effect Amid Broader Cybercrime Crackdown Targeting Crypto and AI – CCN.com



Key Takeaways

  • New rules governing Internet of Things (IoT) devices have come into force in the UK.
  • The new framework requires manufacturers to create unique login credentials for each device and automatically implement security updates.
  • Meanwhile, authorities in the country continue to crack down on cybercriminals’ use of crypto and AI.

Given the rising prevalence of devices such as doorbell cameras and smart assistants, Internet of Things (IoT) security is of growing importance to consumers, as signaled by the UK’s Product Security and Telecommunications Infrastructure regime, which came into force on Monday.

The new framework sits within wider efforts to tackle cybercrime in the UK, giving device manufacturers a legal mandate to protect users from threats.

New Rules for Smart Device Manufacturers

From remote carjacking to compromised webcams, internet-connected personal devices have been behind some of the most alarming cybersecurity incidents of recent years.

To beef up device security, the new framework requires manufacturers to create unique login credentials for each device, targeting a vulnerability hackers have been known to exploit.
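The vulnerability in question is the shared factory-default password, which lets an attacker who learns one device's credentials log in to every unit of that model. A minimal sketch of the alternative the rules point toward, generating a random password per device at provisioning time, might look like the following (the function name and credential format are illustrative assumptions, not anything prescribed by the regulation):

```python
import secrets
import string

def provision_device_credentials(serial_number: str) -> dict:
    """Generate a unique, random password for a single device at
    manufacture time, instead of shipping a shared factory default."""
    alphabet = string.ascii_letters + string.digits
    password = "".join(secrets.choice(alphabet) for _ in range(16))
    return {"serial": serial_number, "password": password}

# Two devices off the same production line get different credentials,
# so compromising one does not compromise the rest.
a = provision_device_credentials("SN-0001")
b = provision_device_credentials("SN-0002")
assert a["password"] != b["password"]
```

Because each password is drawn from a cryptographically secure source rather than derived from a printed serial number, leaking one device's credentials reveals nothing about any other unit.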

Other provisions in the regulation require manufacturers to develop and implement software security updates, and where appropriate, install them automatically. If automatic updates aren’t viable, manufacturers must make them as easy as possible for users to install themselves.

But although the idea of hackers compromising personal devices is especially chilling, it isn't the only cybersecurity threat to affect consumers in the UK. On top of the new legislative security mandate, the UK is also at the forefront of global efforts to clamp down on cybercrime involving crypto and AI.

UK Crypto Regulation Enforces KYC

As in most countries, crypto regulation in the UK has sought to impede the anonymous use of cryptocurrency by requiring exchanges and other service providers to carry out Know Your Customer (KYC) measures.

By bringing the crypto sector under the banner of regulated financial services, the UK has significantly thinned out the crop of crypto exchanges able to operate in the country, leaving just a handful that meet the Financial Conduct Authority’s (FCA) standards. 

Advocates of strong KYC rules argue that they make it more difficult for criminals to use crypto to cover their tracks. 


From the darknet drug trade to ransomware and hacker-for-hire services, the nebulous term “cybercrime” encapsulates all manner of illegal activity that relies on cryptocurrency transactions. 

The rise of anti-money laundering (AML) crypto regulation rests on the assumption that anonymous, untraceable transactions benefit criminals more than anyone else.

But UK authorities don’t just rely on firms adhering to strict KYC requirements to track down cybercriminals.

Operation Cronos

In recent years, LockBit has been among the most widespread cyber threats, extorting millions of dollars from organizations around the world. But thanks to the efforts of an international law enforcement crackdown led by the UK’s National Crime Agency (NCA), the ransomware gang has effectively been brought to its knees.

Authorities from 10 countries participated in "Operation Cronos," which resulted in multiple arrests and the takedown of servers in the Netherlands, Germany, Finland, France, Switzerland, Australia, the US, and the UK.


Crucially, law enforcement froze more than 200 cryptocurrency accounts linked to the gang, while the NCA seized the darknet site LockBit used to sell its services.

The sting neutralized a threat that has caused billions of dollars of damage and has wreaked havoc on victims for years. But just as one major cybercrime challenge has been dealt with, new ones continue to emerge.

AI and Cyber Crime

While profits from crypto scams declined by over 50% in 2023, generative AI is being used in a growing number of fraudulent schemes.

From generic investment scams to more sophisticated, targeted heists, criminals are increasingly using convincing deepfakes to trick their victims.

At the same time, growing concerns about the rise of sexually explicit deepfakes have prompted lawmakers to consider new ways to curb the problem.

As the UK prepares to regulate the space, one potential route could be to force AI developers to incorporate measures to prevent their platforms from being used for illicit purposes. 

For instance, digital watermarking could be used to ensure AI-generated content can’t easily be passed off as real, helping to inhibit its use by fraudsters.

In an interview with CCN, Steg.ai co-founder and CEO Eric Wengrowski explained how imperceptible digital watermarks can be embedded in all kinds of different AI-generated media:

“We can embed it in the pixels of an image or in the sound of an audio file in a way that people aren’t going to notice. But we want those watermarks to contain information, credentials about the content’s origin […] so that even as digital assets go through their normal lifecycle of being compressed and cropped [we can still] recover these credentials.”
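The basic idea Wengrowski describes, hiding information in pixel values too subtly for people to notice, can be illustrated with a deliberately naive least-significant-bit scheme. To be clear, this is an assumption-laden teaching sketch, not Steg.ai's method: a real forensic watermark must survive compression and cropping, which plain LSB embedding does not.

```python
def embed_bits(pixels, bits):
    """Embed each payload bit into the least-significant bit of a
    pixel value -- a change of at most 1 out of 255, which is
    imperceptible to the human eye."""
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & 0xFE) | bit  # clear LSB, then set it
    return marked

def extract_bits(pixels, n):
    """Recover the first n embedded bits by reading each LSB."""
    return [p & 1 for p in pixels[:n]]

pixels = [200, 17, 84, 255, 3, 140, 99, 60]   # one row of greyscale values
payload = [1, 0, 1, 1, 0, 0, 1, 0]            # e.g. bits of a provenance ID
marked = embed_bits(pixels, payload)

assert extract_bits(marked, len(payload)) == payload
assert all(abs(m - p) <= 1 for m, p in zip(marked, pixels))  # imperceptible
```

Production watermarking systems spread the payload redundantly across transform-domain coefficients precisely so that the credentials survive the "normal lifecycle of being compressed and cropped" that Wengrowski mentions.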

As for the role of regulation, Wengrowski noted that lawmakers would be unlikely to prescribe specific algorithms for AI developers to use. But they could set standards within which platforms would be expected to work.

“Where [regulation] can be very useful is in specifying the desired outcomes in a particular domain,” he noted. 
