Five AI Predictions for 2025

Laura Baldwin, President of O’Reilly, offers her top five predictions for AI progress in the coming year.
  • What corporate responsibilities should a company consider when leveraging generative AI (GenAI) solutions?

Prediction: To more responsibly address the impact of AI and automated systems, organisations will adopt balanced overarching AI policies that account for the evolving obligations they have to all stakeholders, including shareholders, employees, customers, and business partners.

Companies must come to grips with the scale at which AI works as well as the consequences (both positive and negative) of using it—particularly if AI tools are implemented haphazardly across the enterprise without the benefit of an overarching AI policy.

Introducing a new technology like AI doesn’t change a company’s basic responsibilities. However, organisations must be careful to continue living up to those obligations. Many people, for instance, believe that a corporation’s sole responsibility is to maximise short-term shareholder value, with little or no concern for the long term. But in that scenario, everybody loses, including stockholders who don’t realise that their short-term profits will have long-term consequences.

The problem that AI introduces is the scale at which automated systems can cause harm. AI magnifies issues that would be easily rectified if they affected only a single person. These are thorny challenges that can only be solved by remembering that corporate responsibility extends beyond your shareholders. Your employees are also stakeholders, as are your customers and business partners. Today, it’s anyone participating in the AI economy—and we need a balanced approach to the entire ecosystem.

  • What ethical principles should guide the development and use of AI models?

Prediction: As organisations adopt AI technologies more broadly, there will be an increased need to properly train employees to critically evaluate AI-generated results and understand the limitations of these tools.

It’s important to educate your staff on the benefits and liabilities of using AI systems. By now, everyone knows that large language models make mistakes, commonly referred to as hallucinations. To avoid being deceived by the overconfident voice of many AI tools—and to ensure that results align with ethical requirements—your employees must be able to critically examine the output of an AI system and determine whether it’s correct. This is especially true for technical employees who are developing applications that use AI systems through an API.

Ultimately, it’s an employer’s responsibility to ensure that their staff have the appropriate training to detect and correct errors within AI systems and to validate that these results align with the values their organisation professes.

  • What are the primary concerns when it comes to AI and intellectual property infringement?

Prediction: Organisations that don’t consider the possibility of intellectual property infringement may reap the rewards promised by GenAI, but they’re courting risk. Those that respect copyright and want to avoid legal turmoil will find ethical ways to leverage the technology.

Lawsuits. They’re expensive. And there are more than 25 of them currently pending against AI companies for using copyrighted works to train AI models without permission. If your organisation is using one of these companies to produce work that may be found to have illegally leveraged copyrighted content, what’s your exposure?

That said, fear of lawsuits can’t be the only thing driving your organisation’s decision making around AI. More important is ensuring that you’re staying true to your stakeholders. How you work with your partners and retain their trust can make or break your future relationship with them. For example, if you’re a company that works with content, how can you legally and ethically leverage that content to be used by an AI tool without infringing on creators’ copyrights?

O’Reilly is an online learning platform that works with authors and subject-matter experts to create content for our users. It has long been our approach for our authors to retain copyright for the content they create with us, while we maintain exclusive rights to leverage that content however we see fit. We’ve worked hard to develop an AI solution that pays our experts when their content is used to generate an answer for a user. So, authors get paid, and we get to provide a beneficial learning tool for our platform members. Everybody wins; zero copyright infringement.

  • How can organisations ensure accountability to original content creators?

Prediction: As organisations expand their use of AI-generated content, prompt management will become an important strategy for demonstrating accountability to original content creators.

While a company’s legal obligations are unclear when it comes to using copyrighted works in training data, treating business partners fairly remains a primary corporate responsibility. Allowing an AI solution to generate text without accountability is not responsible, given that the technology exists to credit creators when their content is leveraged by GenAI. A better strategy is to manage these situations with careful prompting: for example, a system prompt or a RAG application that controls what sources are used to generate output, making attribution easier.
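To make the idea concrete, here is a minimal, hypothetical sketch of what such prompt management might look like in code. The function name, field names, and prompt wording are all illustrative assumptions, not any particular vendor’s API: the point is that the application, not the model, controls which licensed sources enter the prompt, so every answer can be traced back to the creators whose content informed it.

```python
# Hypothetical sketch of prompt management for attribution.
# The application assembles the prompt from known, licensed sources
# and keeps a log of whose content was supplied, making it possible
# to credit (or compensate) creators for each generated answer.

def build_attributed_prompt(question, sources):
    """Assemble a RAG-style prompt pinned to known sources.

    `sources` is a list of dicts with 'id', 'author', and 'text' keys
    (illustrative field names; a real system would use its own schema).
    Returns the prompt string plus an attribution log for downstream
    crediting.
    """
    context_blocks = []
    attribution_log = []
    for src in sources:
        # Label each excerpt so the model can cite it by id.
        context_blocks.append(f"[{src['id']}] {src['text']}")
        attribution_log.append({"source_id": src["id"], "author": src["author"]})

    prompt = (
        "Answer the question using ONLY the sources below. "
        "Cite each claim with its [source id].\n\n"
        + "\n".join(context_blocks)
        + f"\n\nQuestion: {question}"
    )
    return prompt, attribution_log


# Example: two licensed excerpts feed one answer; the log records
# exactly whose content was used, so both authors can be credited.
prompt, log = build_attributed_prompt(
    "What is prompt management?",
    [
        {"id": "S1", "author": "A. Author", "text": "Prompt management controls model inputs."},
        {"id": "S2", "author": "B. Writer", "text": "RAG retrieves sources before generation."},
    ],
)
```

Whatever the implementation details, the design choice is the same one the prediction describes: by constraining generation to an explicit, logged set of sources, attribution stops being a forensic exercise after the fact and becomes a property of the system itself.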

  • Beyond copyright concerns, what challenges do AI technologies pose to the workforce?

Prediction: To capitalise on employee productivity, organisations will invest more in providing training and resources to effectively leverage AI.

AI is a fascinating frontier technology, one I believe will significantly boost worker productivity, inspire healthcare advances, encourage technical innovation, and more. And there’s enough economic potential that all organisations can share in one of the great advances of our time. But some adaptation will be required. Innovations change the kinds of jobs that are in demand, and that’s proved true in every technology cycle over the past 100 years. Demand for web developers grew with the advent of tools and languages that let people build their own websites. Cloud engineers came to the fore when the cloud became the ubiquitous way companies ran their systems. And with the emergence of GenAI, prompt engineers have now moved to the forefront.

With innovation comes disruption, and disruption drives action. We can expect to see even more new roles appear as AI technology develops, although that doesn’t mean the old roles are obsolete. AI will be a huge productivity tool for any number of positions, and that’s why training is so important. Employees need to understand what AI technology can do besides acting as an interesting search engine; they’ll have to learn to work with AI to enhance—and even expand upon—what they’re already accomplishing. An organisation looking to the future will provide its employees with the learning tools and the context for how AI can enable their roles, setting both the company and its workers up for long-term success.


About the Author

Laura Baldwin is President of O’Reilly. For more than 45 years, O’Reilly has shared the knowledge and taught the skills people need to change their world, imparting the world-shaping ideas of innovators through books, articles, conferences, and our online learning platform. When individuals, teams, and entire enterprises connect with the world’s leading experts and content providers, anything is possible. Whether you’re working to advance your career, be a better manager, or achieve the next breakthrough in technology or business, learning new skills is at the heart of it all. With a range of formats including live online training courses, interactive tutorials, books, videos, and case studies, we equip all members of the workforce with the insight they need to stay ahead in an ever-changing economy.

