Author: Tronserve admin
Monday 26th July 2021 10:26 PM
The 7 Most Dangerous Technology Trends In 2020 Everyone Should Know About
As we enter new frontiers with the latest technology trends and enjoy the many positive impacts they can have on the way we work, play, and live, we must also stay conscious of, and be prepared for, their possible negative impacts and potential misuse. Here are seven of the most dangerous technology trends:
1. Drone Swarms
The British, Chinese, and United States armed forces are examining how interconnected, cooperative drones could be used in military operations. Inspired by swarms of insects working together, drone swarms could reinvent future conflicts, whether by overwhelming enemy sensors with sheer numbers or by efficiently covering a large area in search-and-rescue missions. The difference between swarms and the way the military uses drones today is that a swarm could organize itself based on the situation and through interactions among its members to accomplish a goal. While this technology is still experimental, a swarm smart enough to coordinate its own behavior is moving steadily closer to reality. Drone swarms offer real benefits, such as minimizing casualties, at least for the offense, and achieving search-and-rescue objectives more efficiently, but the thought of weaponized machines able to "think" for themselves is fodder for nightmares. Regardless of the negative possibilities, there seems little doubt that swarm technology will ultimately be deployed in future conflicts.
2. Spying Smart Home Devices
For smart home devices to respond to queries and be as useful as possible, they must listen and track information about you and your regular habits. When you added the Echo to your room as a radio and alarm clock (or any other smart device connected to the Internet), you also let a potential spy into your home. All the information smart devices collect about your habits, such as your viewing history on Netflix; where you live and what route you take home, so Google can tell you how to avoid traffic; and what time you usually arrive home, so your smart thermostat can set your family room to the temperature you prefer, is stored in the cloud. This information makes your life more convenient, but it also creates the potential for abuse. In theory, virtual assistant devices listen for a "wake word" before they activate, but there are instances when a device mistakenly thinks it heard the wake word and begins recording. Any smart device in your home, including gaming consoles and smart TVs, could be an entry point for abuse of your personal information. There are some defensive strategies, such as covering up cameras, turning off devices when they aren't needed, and muting microphones, but none of them is 100% foolproof.
3. Facial Recognition
There are some amazingly useful applications for facial recognition, but it can just as easily be used for sinister purposes. China stands accused of using facial recognition technology for surveillance and racial profiling. Its cameras not only spot jaywalkers but have also been used to monitor and control Uighur Muslims living in the country. Russia's cameras scan the streets for "people of interest," and there are reports that Israel tracks Palestinians inside the West Bank. Beyond tracking people without their knowledge, facial recognition is plagued by bias. When an algorithm is trained on a dataset that is not diverse, it is less accurate and more likely to misidentify people.
4. AI Cloning
With the support of artificial intelligence (AI), all that's needed to create a clone of someone's voice is a snippet of audio. Similarly, AI can take several photos or videos of a person and then create an entirely new, cloned video that appears to be an original. It has become quite easy for AI to create an artificial YOU, and the results are so convincing that our brains have trouble differentiating between what is real and what is cloned. Deepfake technology, which uses facial mapping, machine learning, and artificial intelligence to create representations of real people doing and saying things they never did, is now targeting "ordinary" people. Celebrities used to be at greater risk of becoming victims of deepfakes because there was abundant video and audio of them available to train the algorithms. However, the technology has advanced to the point that it no longer requires much raw data to create a convincing fake video, and there are far more images and videos of ordinary people on the internet and social media to draw from.
5. Ransomware, AI and Bot-enabled Blackmailing and Hacking
When high-powered technology falls into the wrong hands, it can be very effective for criminal, immoral, and malicious activities. Ransomware, in which malware is used to block access to a computer system until a ransom is paid, is on the rise according to the Cybersecurity and Infrastructure Security Agency (CISA). Artificial intelligence can automate tasks so they get done more efficiently. When those tasks, such as spear phishing, involve sending out fake emails to trick people into giving up their private information, the negative impact could be severe. Once the software is built, there is little to no cost to repeating the task over and over. AI can quickly and efficiently blackmail people or hack into systems. Although AI plays a tremendous role in combating malware and other threats, it is also being used by cybercriminals to perpetrate those very crimes.
6. Smart Dust
Microelectromechanical systems (MEMS), the size of a grain of salt, contain sensors, communication mechanisms, autonomous power supplies, and cameras. Also called motes, this smart dust has a plethora of positive uses in healthcare, security, and much more, but it would be frightening to control if used for malicious purposes. While spying on a known enemy with smart dust might fall into the positive column, invading a private citizen's privacy would be just as easy.
7. Fake News Bots
GROVER is one AI system capable of writing a fake news article from nothing more than a headline. AI systems such as GROVER can create articles more believable than those written by humans. OpenAI, a nonprofit company backed by Elon Musk, created "deepfakes for text" that produce news stories and works of fiction so convincing that the organization initially considered withholding the research from the public to prevent dangerous misuse of the technology. When fake articles are promoted and shared as true, the consequences can be serious for individuals, businesses, and governments.
Alongside the positive uses of today's technology, there is no doubt that it can be highly dangerous in the wrong hands.