Recording the model development process on the blockchain can make that process more structured, transparent, and repeatable, resulting in less bias and more accountability.

The past few years have brought much hand-wringing and arm-waving about artificial intelligence (AI), as business people and technologists alike worry about the outsize decisioning power they believe these systems to have. As a data scientist, I am accustomed to being the voice of reason about the possibilities and limitations of AI. In this article I'll explain how companies can use blockchain technology for model development governance: a breakthrough in better understanding AI, making the model development process auditable, and identifying and assigning accountability for AI decisioning.

Using blockchain for model development governance

While there is widespread awareness of the need to govern AI, the discussion of how to do so is often nebulous, as in "How to Build Accountability into Your AI" in Harvard Business Review:

"Assess governance structures. A healthy ecosystem for managing AI must include governance processes and structures…. Accountability for AI means looking for solid evidence of governance at the organizational level, including clear goals and objectives for the AI system; well-defined roles, responsibilities, and lines of authority; a multidisciplinary workforce capable of managing AI systems; a broad set of stakeholders; and risk-management processes. Additionally, it is vital to look for system-level governance elements, such as documented technical specifications of the particular AI system, compliance, and stakeholder access to system design and operation information."

This exhaustive list of requirements is enough to make any reader's eyes glaze over. How exactly does an organization go about obtaining "system-level governance elements" and providing "stakeholder access to system design and operation information"?

Here is actual, actionable advice: Use blockchain technology to ensure that all of the decisions made about an AI or machine learning model are recorded and auditable. (Full disclosure: In 2018 I filed a US patent application [16/128,359 USA] around using blockchain for model development governance.)

How blockchain creates auditability

Developing an AI decisioning model is a complex process that comprises myriad incremental decisions: the model's variables, the model design, the training and test data utilized, the selection of features, and so on. All of these decisions could be recorded to the blockchain, which could also provide the ability to view the model's raw latent features. You could also record to the blockchain every scientist who built a portion of the variable sets or who participated in model weight creation and model testing. Model governance and transparency are essential in building ethical AI technology that is auditable. As enabled by blockchain technology, the sum total of these recorded decisions provides the visibility required to effectively govern models internally, ascribe accountability, and satisfy the regulators who are definitely coming for your AI.
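To make the recording idea concrete, here is a minimal sketch in Python, with a simple in-memory hash chain standing in for a real blockchain. Every name, field, and value here is illustrative, not the design described in the patent application: each development decision becomes a record whose hash folds in the hash of the previous record, so any later alteration is detectable.

```python
# A minimal sketch of recording model-development decisions to a hash chain.
# An in-memory list stands in for a real blockchain; all names are illustrative.
import hashlib
import json
import time


def record_decision(chain, actor, action, details):
    """Append a model-development decision to an in-memory hash chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {
        "timestamp": time.time(),
        "actor": actor,          # who made the decision
        "action": action,        # e.g. "variable_added", "ethics_test"
        "details": details,      # free-form description of the decision
        "prev_hash": prev_hash,  # link to the previous record
    }
    # Hash the entry together with the previous hash to chain the records.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return entry


chain = []
record_decision(chain, "scientist_a", "variable_added",
                {"variable": "txn_velocity_24h", "source": "validated_store"})
record_decision(chain, "manager_b", "approval",
                {"item": "txn_velocity_24h", "result": "approved"})
```

Because each record's hash depends on its predecessor's, rewriting any past decision would invalidate every record after it, which is what makes the trail auditable rather than merely logged.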
Before blockchain: Analytic models adrift

Before blockchain became a buzzword, I began implementing a similar analytic model management approach in my data science organization. In 2010 I instituted a development process centered on an analytic tracking document (ATD). The ATD detailed model design, variable sets, scientists assigned, training and testing data, and success criteria, breaking down the entire development process into three or more agile sprints. I recognized that a structured approach with ATDs was required because I'd seen far too many negative outcomes from what had become the norm across much of the financial industry: a lack of validation and accountability.

Using banking as an example, a decade ago the typical lifespan of an analytic model looked like this:

- A data scientist builds a model, self-selecting the variables it contains. This leads to scientists creating redundant variables, bypassing validated variable designs, and introducing new errors in model code. In the worst cases, a data scientist might make decisions with variables that could introduce bias, model sensitivity, or target leaks.
- When that data scientist leaves the organization, his or her development directories are typically either deleted or left behind in such numbers that it becomes unclear which directories produced the final model. The bank often doesn't have the source code for the model, or has only pieces of it.
- Looking at code alone, no one definitively understands how the model was built, the data on which it was built, or the assumptions that factored into the model build.
- Ultimately the bank could be put in a high-risk situation by assuming the model was built properly and will behave well, without really knowing either. The bank is unable to validate the model or to understand under what conditions the model will be unreliable or untrustworthy.

These realities result in unnecessary risk, or in a large number of models being discarded and rebuilt, often repeating the journey above.

A blockchain to codify accountability

My patent-pending invention describes how to codify analytic and machine learning model development using blockchain technology to associate a chain of entities, work tasks, and requirements with a model, including testing and validation checks. It replicates much of the historical approach I used to build models in my organization. The ATD remains essentially a contract between my scientists, managers, and me that describes:

- What the model is
- The model's objectives
- How we'd build the model, including the prescribed machine learning algorithm
- Areas where the model must improve, for example a 30% improvement in card-not-present (CNP) credit card fraud at a transaction level
- The degrees of freedom the scientists have to solve the problem, and those they don't
- Reuse of trusted and validated variable and model code snippets
- Training and test data requirements
- Ethical AI procedures and tests
- Robustness and stability tests
- Specific model testing and model validation checklists
- The specific analytic scientists assigned to select the variables, build the models, and train them, and those who will validate code, confirm results, and test the model variables and model output
- Specific success criteria for the model and for specific customer segments
- Specific analytic sprints, tasks, and scientists assigned, with formal sprint reviews and approvals of requirements met

As you can see, the ATD spells out a very specific set of requirements. The team includes the direct modeling manager, the group of data scientists assigned to the project, and me as owner of the agile model development process.
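As one way to picture it, here is a hypothetical sketch of an ATD captured as structured data, from which requirements and tasks could be broken out and written to the chain. The schema and field names are my illustration for this article, not the actual ATD format used at FICO.

```python
# A hypothetical representation of an ATD as structured data. Field names and
# values are illustrative only, not the real ATD schema.
from dataclasses import dataclass, field


@dataclass
class Task:
    description: str      # e.g. "select candidate variables for segment 1"
    assigned_to: str      # the scientist responsible for the work
    validator: str        # the scientist who must confirm the results
    status: str = "open"  # open -> in_review -> approved


@dataclass
class AnalyticTrackingDocument:
    model_name: str
    objective: str        # e.g. "30% CNP fraud improvement"
    algorithm: str        # the prescribed machine learning algorithm
    success_criteria: list[str] = field(default_factory=list)
    tasks: list[Task] = field(default_factory=list)
    signatories: list[str] = field(default_factory=list)


atd = AnalyticTrackingDocument(
    model_name="cnp_fraud_v2",
    objective="30% improvement in CNP fraud at a transaction level",
    algorithm="neural_network",
    success_criteria=["30% CNP fraud improvement", "stability tests passed"],
    tasks=[Task("select candidate variables", "scientist_a", "scientist_b")],
    signatories=["scientist_a", "scientist_b", "modeling_manager", "cao"],
)
```

Once broken out this way, each task and requirement can be recorded, worked, and approved as an individual entry on the chain, which is exactly how the signed ATD is used in practice.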
Everyone on the team signs the ATD as a contract once we've all negotiated our roles, responsibilities, timelines, and requirements of the build. The ATD becomes the document by which we define the entire agile model development process. It is then broken into a set of requirements, roles, and tasks, which are put on the blockchain to be formally assigned, worked, validated, and completed.

With individuals tracked against each of the requirements, the team then assesses a set of existing collateral, typically pieces of previously validated variable code and models. Some variables have been approved in the past, others will be adjusted, and still others will be new. The blockchain then records each time a variable is used in the model: code adopted from code stores, code written new, and any changes made, along with who did the work, which tests were performed, which modeling manager approved it, and my sign-off.

A blockchain enables granular tracking

Importantly, the blockchain instantiates a trail of decision making. It shows whether a variable is acceptable, whether it introduces bias into the model, and whether it is utilized properly. The blockchain is not just a checklist of positive outcomes; it is a record of the journey of building these models, in which mistakes, corrections, and improvements are all captured. For example, outcomes such as failed Ethical AI tests are persisted to the blockchain, as are the remediation steps used to remove bias. We can see the journey at a very granular level:

- The pieces of the model
- The way the model functions
- The way the model responds to expected data, rejects bad data, or responds to a simulated changing environment

All of these items are codified in the context of who worked on the model and who approved each action. At the end of the project we can see, for example, that each of the variables contained in this critical model has been reviewed, put on the blockchain, and approved. This approach provides a high level of confidence that no one has added a variable that performs poorly or introduces some form of bias into the model. It ensures that no one has used an incorrect field in a data specification or changed validated variables without permission and validation. Without the critical review process afforded by the ATD (and now the blockchain) to keep my data science organization auditable, my data scientists could inadvertently introduce a model with errors, particularly as these models and their associated algorithms become more and more complex.
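Continuing the earlier sketch, the audit side of this might look like the following: recompute each record's hash and check its back-link to confirm the recorded journey is intact. Again, this is an illustrative stand-in for the consensus and immutability guarantees a real blockchain would provide.

```python
# A minimal sketch of the audit step, reusing the record format from the
# earlier example: recompute each record's hash and verify the back-links.
# Any altered, reordered, or removed record breaks verification. Illustrative only.
import hashlib
import json


def verify_chain(chain):
    """Return True if every record's hash and back-link are intact."""
    prev_hash = "0" * 64
    for entry in chain:
        # Recompute the hash over everything except the stored hash itself.
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["hash"] != recomputed or entry["prev_hash"] != prev_hash:
            return False  # a record was altered or re-ordered
        prev_hash = entry["hash"]
    return True
```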
Model development journeys that are transparent result in less bias

In sum, overlaying the model development process on the blockchain gives the analytic model its own entity, life, structure, and description. Model development becomes a structured process, at the end of which detailed documentation can be produced to ensure that all elements have gone through the proper review. These elements can also be revisited at any time in the future, providing essential assets for use in model governance. Many of these assets become part of the observability and monitoring requirements when the model is ultimately used, rather than having to be discovered or assigned post-development. In this way, analytic model development and decisioning becomes auditable, a critical factor in holding AI technology, and the data scientists who design it, accountable. That accountability is an essential step in eradicating bias from the analytic models used to make decisions that affect people's lives.

Scott Zoldi is chief analytics officer at FICO, responsible for the analytic development of FICO's product and technology solutions. While at FICO, Scott has authored more than 110 analytic patents, with 71 granted and 46 pending. Scott is actively involved in the development of new analytic products and big data analytics applications, many of which leverage new streaming analytic innovations such as adaptive analytics, collaborative profiling, and self-calibrating analytics. Scott is most recently focused on the applications of streaming self-learning analytics for real-time detection of cybersecurity attacks. Scott serves on two boards of directors, Software San Diego and Cyber Center of Excellence. Scott received his PhD in theoretical and computational physics from Duke University.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.