What if AI scans legislation and allocates funds to agencies?

3AI February 15, 2021

New Treasury Department software points the way. But research suggests that it’s impossible to show that an artificial ‘superintelligence’ can be contained.

If, like me, you’re worried about how members of Congress are supposed to vote on a stimulus bill so lengthy and complex that nobody can possibly know all the details, fear not — the Treasury Department will soon be riding to the rescue.

But that scares me a little too.

Let me explain. For the past few months, the department’s Bureau of the Fiscal Service has been testing software designed to scan legislation and correctly allocate funds to various agencies and programs in accordance with congressional intent — a process known as issuing Treasury warrants. Right now, human beings must read each bill line by line to work out where the money goes. If the program can be made to work, the savings will be significant.

Alas, there’s a big challenge. Plenty of tools exist for extracting data from HTML files (and, of course, XML files), but Congress initially publishes legislation only in PDF form; XML or HTML versions often arrive only weeks later. As many a business knows, scraping data from PDFs generally requires human intervention, which invites copy errors. The trouble is that PDFs have no standard data format. Even “simple” extraction methods are generally designed to work only if the data in question is already laid out within the PDF in tabular form.
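To make the difficulty concrete, here is a minimal sketch of the extraction step, assuming the open-source pdfplumber library; the file name is hypothetical, and a real appropriations bill is far messier than any toy example suggests.

```python
# Minimal sketch: pulling raw text (and any tables) out of a bill PDF.
# Assumes the third-party `pdfplumber` library; "stimulus_bill.pdf" is
# a hypothetical file name.
import pdfplumber

with pdfplumber.open("stimulus_bill.pdf") as pdf:
    for page in pdf.pages:
        # extract_text() returns one string per page, or None for
        # image-only pages, which would need OCR instead.
        text = page.extract_text() or ""
        print(text)

        # Table extraction succeeds only when the PDF happens to lay
        # the data out in a detectable grid, which is exactly the
        # limitation noted above.
        for table in page.extract_tables():
            print(table)
```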

Treasury’s ambitious hope, however, is that its software, when fully operational, will be able to scan new legislation in its natural language form, figure out where the money is supposed to go and issue the appropriate warrants far more swiftly than humans could. The faster the warrants are issued, the sooner the agency that’s supposed to receive the money can start spending.
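To see why that hope is ambitious, consider a deliberately naive illustration of the parsing involved. This is not Treasury’s system: the pattern, the agency, and the sample clause below are all invented, and genuine appropriations language varies far too much for simple pattern matching to handle.

```python
import re

# Hypothetical pattern; a production system would need real language
# understanding, not a regular expression.
APPROPRIATION = re.compile(
    r"\$([\d,]+)\s+(?:is|shall be)\s+appropriated\s+to\s+([^,.;]+)",
    re.IGNORECASE,
)

bill_text = (
    "Provided, that $1,000,000,000 shall be appropriated to "
    "the Department of Agriculture, to remain available until expended."
)

for amount, recipient in APPROPRIATION.findall(bill_text):
    print(f"${amount} -> {recipient.strip()}")
# Prints: $1,000,000,000 -> the Department of Agriculture
```

The hard part is everything the regular expression cannot see: provisos, rescissions, transfers between accounts, and clauses whose meaning depends on text hundreds of pages away.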

Pretty cool stuff.

Yet this snapshot of the future inspires a wicked train of thought. Suppose that the Treasury Department software — which you are free to describe as artificial intelligence or not, depending on your taste — is later replaced by a better program, then by a better one and finally by one that can mimic the working general intelligence of the human mind.

What’s to stop this future AI from deciding on its own that Congress was wrong to give another billion to Agency A when, in the judgment of the program, Agency B needs it more? The program makes a tiny adjustment in a gigantic spending bill, and given that nobody’s actually read it, nobody’s the wiser.

Sounds improbable, right? HAL 9000 meets “Person of Interest” meets Skynet?

Not so fast.

For technophiles like me, recent achievements in AI are exciting, even breathtaking. AI is credited with reorganizing supply chains to help overcome disruptions caused by the pandemic. Deep learning systems may be able to detect coronary plaques more accurately than clinicians can.

So why worry? After all, most of those in the field, including my professors when I studied artificial intelligence as an undergraduate, are confident that tight programming will keep even the most advanced artificial intelligence from escaping the bounds set by its creators. (Think Isaac Asimov’s Laws of Robotics.)

But there have long been dissenters, even among the experts. The prospect of an out-of-control AI has haunted researchers in the field for almost as long as it’s haunted science fiction writers. One thinks of Joseph Weizenbaum’s “Computer Power and Human Reason,” published back in 1976, or even Norbert Wiener’s classic “God and Golem, Inc.,” based on lectures the author delivered in 1962.

All of which brings us to an unnerving paper published last month by six AI researchers who argue that it is impossible to show that an artificial “superintelligence” can be contained. The authors are an international group, representing universities in Germany, Spain, and Chile, as well as the U.S. According to their analysis, no matter how tightly an AI may be programmed, if it indeed possesses generalized reasoning skills “far surpassing” those of the most gifted humans, what they call “total containment” turns out to be incapable of formal proof.

Using what is known as computability theory, they hypothesize a superintelligent AI that incorporates a fundamental command never to harm humans. (Asimov again.) The programming will then require a function that decides whether a particular action will harm humans. They proceed to show that even if it’s possible “to articulate in a precise programming language” a perfect set of “control strategies” to implement this function, there’s no way to know for sure whether those strategies will in fact constrain the AI. (The proof, although technical, is rather elegant and fun to read.)
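For readers who want the shape of the argument without the formalism, here is a minimal sketch, in code form, of the kind of reduction involved; every name in it is hypothetical. The key move is that a perfect harm-checking function would double as a solver for Turing’s halting problem, which is provably beyond any program.

```python
# Sketch of the reduction, with hypothetical names throughout. Assume,
# for contradiction, that a total, always-correct decider exists:
# harms(prog, data) answers "does running prog(data) ever harm humans?"

def harms(prog, data):
    """Hypothetical perfect harm-decider; the argument shows none can exist."""
    raise NotImplementedError

def harm_humans():
    """Stand-in for any action the containment rule is meant to forbid."""

def halts(prog, data):
    """If `harms` existed, it would also decide the halting problem."""
    def trojan():
        prog(data)      # loops forever exactly when prog(data) never halts
        harm_humans()   # reached only if prog(data) halts
    # trojan harms humans precisely when prog(data) halts, so asking
    # `harms` about it answers the halting question. Turing proved no
    # program can decide halting in general, so `harms` cannot exist.
    return harms(trojan, None)
```

Any containment strategy that depends on deciding in advance whether a program’s behavior will be harmful inherits that undecidability, which is why the authors conclude that total containment resists formal proof.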

Don’t get me wrong: I’m not arguing that the Treasury Department should abandon its quest for a system that extracts data from PDFs, any more than I’m suggesting that any of the countless researchers working on various aspects of AI should halt. I continue to find the prospect of true artificial intelligence as exciting as ever.

What concerns me, however, is the way that public critiques of AI tend to pick around the edges rather than go to the heart of the matter. We often charge nascent AI systems with amplifying bias — for example, by exacerbating rather than correcting disparities in the distribution of health care. Such issues are of undeniable public importance. But as the authors of the paper on computability remind us, you don’t have to be either a technophobe or a fan of apocalyptic steampunk sci-fi to see that the time for public conversation about the containability of AI is now, not later.


