Generative AI Insights, an InfoWorld blog open to outside contributors, provides a venue for technology leaders to explore and discuss the challenges and opportunities presented by generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld’s technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content.
It’s no longer about how good your model is; it’s about how good your data is. Why privacy-preserving synthetic data is key to scaling AI.
Successful integration of AI into daily operations hinges on front-line employees, yet the impact on their morale is often overlooked.
Balancing performance, energy efficiency, and cost-effectiveness, CPUs adeptly handle the less-intensive inference tasks that make up the lion’s share of AI workloads.
The average user of AI lacks an adequate understanding of the tools they increasingly depend on for decision-making and work. We need to change that.
Fine-tuning and retrieval augmentation are time-consuming and expensive. A better way of specializing LLMs is on the horizon.
Generative AI not only makes analytics tools easier to use, but also substantially improves the quality of automation that can be applied across the data analytics life cycle.
Responsible AI isn’t really about principles, or ethics, or explainability. It can be the key to unlocking AI value at scale, but we need to shatter some myths first.
Through natural language queries and graph-based RAG, TigerGraph CoPilot addresses the complex challenges of data analysis and the serious shortcomings of LLMs for business applications.
Retrieval-augmented generation gives generative AI the one big thing whose absence was holding it back in the enterprise.
The key to reaping the benefits of AI while minimizing the risks is through responsible development and use. Here’s how SAS Viya puts ethical AI practices to work.
Hardware requirements vary for machine learning and other compute-intensive workloads. Get to know these GPU specs and Nvidia GPU models.
Generative AI promises to be transformative for software development, but only if we ensure that all code is analyzed, tested, and reviewed.
Proper context and data privacy should be top of mind for developers building applications on generative AI for B2B use cases.
Large language models can reshape business processes by automating substantial portions of complex tasks. But they can’t do it alone.
Developing AI and machine learning applications requires plenty of GPUs. Should you run them on-premises or in the cloud?
From managing data to scaling systems to funding initiatives for the long haul, every part of your generative AI journey will be a challenge.
Five key questions you should ask before embarking on the journey to create your own in-house large language model.
Generative AI will reshape how we develop AI-driven products for the physical economy, starting with the creation of synthetic data sets for challenging use cases.
Should your company leverage a public large language model such as ChatGPT or your own private LLM? Understand the differences.
Three powerful approaches have emerged to improve the reliability of large language models by developing a fact-checking layer to support them.
With the right architecture, AI and automation can help drive entire business operations. Here’s a roadmap.
Autonomous driving edge cases require complex, human-like reasoning that goes far beyond legacy algorithms and models. Large language models are getting there.
From faster vector search to collaborative Notebooks, SingleStore recently unveiled several AI-focused innovations with developers in mind. Let’s dive in.
Ever-larger datasets for AI training pose big challenges for data engineers and big risks for the models themselves.
Succeeding with generative AI requires the expertise to design new use cases and the ability to develop and operationalize genAI models. These are major challenges.
GenAI offers an opportunity to make data integration and business process automation not only easier to implement, but accessible to non-technical staff. That spells relief for IT teams.
Model quantization bridges the gap between the computational limitations of edge devices and the demands for highly accurate models and real-time intelligent applications.
How we can take advantage of generative AI, common application structures, and systematic code reuse to drive faster and more innovative digital product development.
There is no universal ‘best’ vector database—the choice depends on your needs. Evaluating scalability, functionality, performance, and compatibility with your use cases is vital.
If we replace junior developers with machines, we won’t have engineers trained to do the more thoughtful work required to move software forward.
The high costs of development and training and the lack of pricing transparency put commercial large language models out of reach for many companies. Open source models could change that.
By making cryptic machine data human readable, generative AI will dramatically reduce the time and energy IT teams spend on managing and interpreting data generated by operational systems.
The integration of large language models into third-party products and applications presents many unknown security and privacy risks. Here’s how to address them.
Hackers have infiltrated a tool your software development teams may be using to write code. Not a comfortable place to be.
As GPU-accelerated databases bring new levels of performance and precision to time-series and spatial workloads, generative AI puts complex analysis within reach of non-experts.
The hallucinations of large language models are mainly a result of deficiencies in the dataset and training. These can be mitigated with retrieval-augmented generation and real-time data.
Digital adoption platforms learn application usage patterns and user behaviors and walk workers through business processes in real time, offering guidance and automating tasks. They can help all of us get the most from AI.
Large language models have immense potential, but also major shortcomings. Knowledge graphs make LLMs more accurate, transparent, and explainable.
AI and machine learning will boost the creativity and problem-solving abilities of software developers. They will also establish a new oligopoly over the software industry.
The impacts of large language models and AI on cybersecurity range from the good to the bad to the ugly. Here’s what to watch out for, and how to prepare.
Generative AI is already proving helpful across many relatively basic use cases, but how does it hold up when tasked with more technical guidance?
Generative AI can provide valuable analysis and insights to IT operators. But what about data security, reliability, workflow integration, and the conditions needed for successful deployment?
By allowing the use of AI tools proven to be safe, but requiring them to be used within explicit guidelines, you can alleviate both employee frustration and organizational risk.
The excitement and turmoil surrounding generative AI are not unlike the early days of open source, or the Wild West. We can resolve the uncertainty and confusion.
Humans must act as custodians, preserving high-quality data as AI use continues to advance.
Generative AI is not the first new technology that has changed how software developers work. While developers have nothing to fear, the stakes will be high for their employers.
Unstructured text and data are like gold for business applications and the company bottom line, but where to start? Here are three tools worth a look.
For businesses and their customers, the answers to most questions rely on data that is locked away in enterprise systems. Here’s how to deliver that data to GPT model prompts in real time.
Build your own Java-based chatbot and get a feel for interacting with the ChatGPT API in a Java client. (A minimal sketch follows at the end of this list.)
Large language models like GPT-4 and tools like GitHub Copilot can make good programmers more efficient and bad programmers more dangerous. Are you ready to dive in?
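As a companion to the Java chatbot item above, here is a minimal sketch of what such a client can look like, using only the standard java.net.http package to call OpenAI's chat completions endpoint. It is an illustration under stated assumptions, not code from the article: the model name and prompt are placeholders, the API key is assumed to live in an OPENAI_API_KEY environment variable, and a real chatbot would parse the JSON response and loop on user input.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ChatGptClient {
    public static void main(String[] args) throws Exception {
        // Assumes the API key is provided via the OPENAI_API_KEY environment variable.
        String apiKey = System.getenv("OPENAI_API_KEY");

        // Minimal JSON payload for the chat completions endpoint;
        // the model name and user message are placeholders.
        String body = """
                {
                  "model": "gpt-3.5-turbo",
                  "messages": [
                    {"role": "user", "content": "Say hello from a Java client."}
                  ]
                }
                """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Prints the raw JSON response; a chatbot would extract
        // choices[0].message.content and keep the conversation going.
        System.out.println(response.body());
    }
}

Compile and run it with the environment variable set, then swap the hard-coded prompt for a loop over System.in to turn the one-shot call into an interactive chat.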