It’s clear that artificial intelligence is powerful and cost-effective in the public cloud. However, it can also be weaponized for unethical tasks. Just ask any marketing person: it’s their job to keep demand for a product or service high, so they depend on advertising and other methods to create brand recognition and a sense of demand for whatever they sell. These days marketing firms are even more clever, recruiting social media influencers who promote a product or service directly or indirectly, sometimes without disclosing that they are being paid to do so.

We’re getting better at influencing humans, either through traditional advertising methods such as keyword advertising or, even scarier, by leveraging AI to change hearts and minds. Often the targets don’t even realize that their hearts and minds are being changed.

Researchers have identified this risk in GPT-2, the AI-powered text generator that OpenAI released in 2019. The research lab’s language model excited the tech community with its ability to generate convincingly coherent text from almost any prompt. Shortly after GPT-2’s release, observers warned that the powerful natural language processing model wasn’t as innocuous as it seemed. Many pointed out an array of risks the tool could pose, especially in the hands of those who might weaponize it for less-than-ethical purposes.

The core concern was that text generated by GPT-2 could persuade people to break ethical norms established over a lifetime of experience. This is not Manchurian Candidate stuff, where you activate a zombie-like killer; it’s more about gray-area decisions. Consider, for example, a person who would ordinarily not bend the rules for personal gain, such as by stealing a customer from another salesperson. Could that moral person be swayed by an AI system trained to influence human behavior?
Cloud computing has made AI systems affordable and easy to deploy as a force multiplier for existing or net-new business applications. For example, if a sales processing system could use AI-driven influence to convince buyers to purchase just 2% more, that could mean as much as a billion dollars in additional profit for a large enterprise, with minimal investment. The real question is: Even if we can, should we?

I’ve been working with AI since my college years, and one of the reasons it’s so interesting is that you can set up these systems to learn independently and change their behavior based on what they learn over time. For years people have predicted the impending domination of our new robot overlords, but AI is still just a tool and should not be a threat, at least not yet.

Although many are calling for guidelines and even government regulation of the potential use and abuse of AI (most of it cloud-based), I’m not sure we’re there yet. I do expect we’ll see some questionable uses of this technology, much as we did with tracking apps on our phones during the past few years, but this stuff is largely self-regulating. If companies or governments are outed for weaponizing the technology in ways the public finds objectionable, public pressure becomes the regulating mechanism. As with any technology, misuses will have to be evaluated over time. I have some confidence that human intelligence will do the right thing with nonhuman intelligence, at least for now.