In a recent McKinsey Future of Work podcast interview, Microsoft Chief Technology Officer Kevin Scott revealed that artificial intelligence (AI) is about to go prime time and will begin showing up in the most unlikely of places – including our nation’s farm fields.
Once a mysterious science beta-tested only by Fortune 100 companies, AI is rapidly becoming more democratic, inclusive, and utilitarian – even for those residing in under-served communities. It’s also becoming a versatile tool that can branch out into myriad market sectors, including one of our oldest – agriculture.
Details and Implications
Scott recently published a book entitled Reprogramming the American Dream: From Rural America to Silicon Valley – Making AI Serve Us All. The book draws on his personal experience of AI being deployed to serve populations in rural towns and working-class communities, rather than just high-tech cities or corner offices.
The Wall Street Journal recently reported on Microsoft’s FarmBeats program – a platform that leverages AI to improve farming outcomes. Using a machine learning tool that evaluates weather conditions and hydrology data, Microsoft was able to help farmers determine the best time to plant seeds. In India, for instance, crop yields increased by a significant 30% across the 3,000 participating farms. Microsoft is now testing the same platform in Washington State.
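To make the idea concrete, here is a minimal, purely illustrative sketch of how weather and soil data could feed a planting-time recommendation. This is not Microsoft’s actual FarmBeats model – the features, thresholds, and scoring weights below are invented assumptions for demonstration only.

```python
# Illustrative only: a toy "best planting day" scorer. The feature names
# (soil_moisture, rain_mm, temp_c) and all weights are invented assumptions,
# not the real FarmBeats model.
from dataclasses import dataclass

@dataclass
class DayForecast:
    day: str
    soil_moisture: float   # volumetric fraction, 0..1
    rain_mm: float         # forecast rainfall in millimeters
    temp_c: float          # mean air temperature in Celsius

def planting_score(f: DayForecast) -> float:
    """Higher is better: penalize soil that is too dry or waterlogged,
    heavy forecast rain, and temperatures far from a moderate ideal."""
    moisture_term = max(0.0, 1.0 - abs(f.soil_moisture - 0.35) / 0.35)
    rain_term = 1.0 if f.rain_mm < 10 else 0.2
    temp_term = max(0.0, 1.0 - abs(f.temp_c - 20) / 20)
    return moisture_term * rain_term * temp_term

def best_planting_day(forecasts: list[DayForecast]) -> str:
    """Pick the day with the highest planting score."""
    return max(forecasts, key=planting_score).day

week = [
    DayForecast("Mon", 0.15, 0.0, 18),   # soil too dry
    DayForecast("Tue", 0.34, 2.0, 21),   # near-ideal conditions
    DayForecast("Wed", 0.50, 25.0, 19),  # wet soil, heavy rain forecast
]
print(best_planting_day(week))  # Tue scores highest in this toy example
```

A production system would replace the hand-tuned score with a model trained on historical yield data, but the pipeline shape – sensor and forecast features in, a ranked recommendation out – is the same.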
AI’s uses will be far-flung and far-reaching – not only helping farmers make smarter decisions in an ecosystem of ever-changing variables, but also improving the supply chain that moves a farm’s products to market and, ultimately, to the consumer who eats the meal.
Here’s why we expect AI to flourish in the next few years.
– Access to all: The rise of self-supervised AI models and open-source tools will democratize AI development. Scott sees the proliferation of open-source software, cloud computing, and tutorial content on sites such as YouTube conceivably making it possible for “a motivated high school student” to solve complex problems in a weekend rather than six months. “All the indicators point to the fact that they’re going to become further democratized over the next handful of years,” he said.
– Safeguards to combat bias: Bad data sets are the Achilles heel of AI-based applications, and one of the chief reasons AI hasn’t taken off as quickly as everyone thought it would. Bias that creeps into data yields slanted results, which can be very problematic when it comes to achieving useful outcomes. Fortunately, GANs (generative adversarial networks), another type of neural network, can generate synthetic data to compensate for those biases. Essentially, GANs make it possible to train models on balanced data sets even in the absence of naturally occurring representative data.
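A full GAN (a generator and discriminator trained adversarially) is too large for a short sketch, so the snippet below illustrates the underlying rebalancing idea with a simpler stand-in: SMOTE-style interpolation, which creates synthetic minority-class samples by blending real ones. All data points here are invented for illustration.

```python
import random

# Illustrative only: rebalancing a biased dataset with synthetic samples.
# A GAN would *learn* the minority-class distribution with a generator and
# discriminator; here we use simple SMOTE-style interpolation between real
# minority points, which conveys the same rebalancing idea in a few lines.

def synthesize(minority, n, seed=0):
    """Create n synthetic points by interpolating between random pairs
    of real minority-class samples."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        a, b = rng.sample(minority, 2)   # pick two distinct real points
        t = rng.random()                 # blend factor in [0, 1)
        out.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
    return out

majority = [(0.0, 0.0)] * 90                       # over-represented class
minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)]    # under-represented class

# Generate enough synthetic points to match the majority class.
synthetic = synthesize(minority, len(majority) - len(minority))
balanced = majority + minority + synthetic
print(len(balanced))  # 180: both classes now equally represented
```

A GAN replaces the interpolation step with a learned generator, so the synthetic samples follow the full shape of the minority distribution rather than lying on line segments between existing points.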
AI can and should serve under-represented populations in places ranging from rural Virginia to the plains of Tanzania. To help facilitate this, we created the Mindful AI approach, which helps businesses build AI solutions that are more valuable because they are relevant and useful to the people they serve – rather than just to the producers of the platform.
The approach consists of three components:
– Human-centered: End-to-end, human-in-the-loop integration across the AI solution development lifecycle, from concept and discovery through data collection, model training, and testing to scaling. We have access to hundreds of thousands of crowdsourced resources worldwide, and our team collectively speaks more than 200 languages on the platform, annotating and curating data for all AI training needs. The result is the creation of lovable experiences and products with measurable outcomes.
– Responsible: Ensuring that AI systems are free of bias and grounded in ethics – being mindful of how, why, and where data is created and its ethical impact on downstream AI systems. We make AI technology more inclusive by working with under-represented communities to achieve the diversity our data programs need (age, gender, geography, ethnicity, culture, and language), helping our customers build more inclusive AI-based products and reduce bias. We also frequently reach out to local organizations that represent various under-represented communities.
– Trustworthy: Being transparent and explainable about how the AI model is trained, how it works, and why it recommends particular outcomes. Our expertise with AI localization makes it possible for our clients to make their AI applications more inclusive and personalized, respecting critical nuances in local language and user experience that can make or break the credibility of an AI solution from one country to the next. For example, we design our applications for personalized and localized contexts, including languages, dialects, and accents in voice-based applications. That way, an app brings the same level of voice-experience sophistication to every language, from English to under-represented languages.
By designing an AI application more mindfully from the start, a business sets itself up to develop AI applications that are more effective, more inclusive, and helpful to people no matter where they live. Tools such as the Mindful AI Canvas exist to help businesses begin that journey.
AI can potentially burst free from high tech to serve far-flung areas of the world, including our farms and fields. The democratization of AI learning tools is an important step, and we believe Mindful AI can provide essential help.
About the Author
Ahmer Inam, Chief Artificial Intelligence Officer at Pactera EDGE, is an experienced data and analytics executive with 20+ years of experience in leading organizational transformation using data, technology, information systems, analytics, and data products.
Featured image: ©Art_Kevorkov