How Data Is the USPTO's "Liquid Capital"


SOURCE: FORBES.COM
SEP 25, 2021

LOCATION: Alexandria, Virginia. DATE: March 15, 2016. Portrait of Brigit Baron and Scott Beliveau of the Office of the Under Secretary (OUS) of the United States Patent and Trademark Office (USPTO). CREDIT: Jay Premack/USPTO

The main role of the US Patent and Trademark Office (USPTO) is to grant patents for inventions and to register trademarks and service marks for products and services. One of the key ways it is advancing this mission is by using automation and AI to improve operational efficiency for its patent examiners.

At the upcoming October AI in Government online event, Scott Beliveau, Branch Chief of Advanced Analytics at the USPTO, shares his insights into how the USPTO is leveraging data, automation, and AI to advance its mission. In this interview, he describes how a small, scrappy team at the USPTO created an award-winning AI/ML program that is saving the agency tens of millions of dollars and serving over 200 million public requests annually. He also discusses how analytics, automation, and AI work together at the USPTO, and shares examples of successful AI implementation.

What are some innovative ways you’re leveraging data and artificial intelligence (AI) to benefit the USPTO?

Scott Beliveau: As a fee-funded agency, data is the USPTO's "liquid capital" – we see it as an asset for improving our internal decision-making and a way to empower entrepreneurs and innovators. Data supports our program evaluations and fuels our business cases and financial analyses. It also enables our agency to identify cost savings, enable predictive planning, and improve policy and program operations.

High-quality data also feeds AI at the USPTO. Patents' AI efforts primarily focus on natural language processing (NLP) technologies to support patent search and classification. Trademark efforts focus on commercial computer vision products to detect fraud. The use of these AI technologies can help us in our drive to issue high-quality and timely patents and trademark registrations.

How are you leveraging automation to help on your journey to AI?

Scott Beliveau: Our path to AI has really been more the story of a data journey. We started by establishing a data foundation through a shareable and "social" platform (DeveloperHub) to showcase unique ways to use our data and combine it with other datasets. People could take our data, use it, build off it, and give us more information to continue the cycle. This data foundation then enabled us to use natural language processing to extract and codify information for recognition. Today, our data is used in countless ways, including its inclusion in the Pile dataset, which is widely used in the AI/NLP research community.

How do you identify which problem area(s) to start with for your automation and cognitive technology projects?

Scott Beliveau: We always start from a customer value point of view, rather than what IT can do. We then run through a series of questions such as "What do you want?" "What would you do with it if you got it?" or "How much is it worth to you?" With answers in hand, we focus our efforts on delivering incremental wins that support longer-term efforts.

What are some of the unique opportunities the public sector has when it comes to data and AI?

Scott Beliveau: Our agency has data covering every imaginable innovation in the past 250 years. As a public servant, I often get to meet with inventors and hear their stories about how they used the data or our public AI services to create a new company or do a better job. Working in the public sector affords that unique opportunity to impact the lives of many people.

What are some use cases you can share where you successfully have applied AI?

Scott Beliveau: The USPTO currently has two real-world examples of AI working in production: Enriched Citations and Auto Classification.

The first production use of AI at the USPTO was an effort called "Enriched Citations." Our team used natural language processing (NLP) to deconstruct patent application responses (called Office Actions) and create enriched citations that made research easier and faster for stakeholders and international partners. The approach used design thinking from the user perspective to understand stakeholders' needs and the myriad data variables required to deliver user-centric results. The NLP model proved both faster and more accurate than the prior work of dozens of experts, and using NLP saved the agency millions of dollars in enriched citation implementation.
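The USPTO has not published the internals of the Enriched Citations pipeline, but the general idea of pulling structured citation records out of free-form Office Action text can be sketched with simple pattern matching. Everything below (the regex patterns, the `extract_citations` function, the record fields) is a hypothetical illustration, not the agency's actual system.

```python
import re

# Hypothetical sketch: find US patent numbers in Office Action text and
# "enrich" each one with the cited column and line range mentioned nearby.
DOC_PATTERN = re.compile(r"US\s?\d{1,2},?\d{3},?\d{3}")

def extract_citations(text: str) -> list[dict]:
    """Return structured citation records found in Office Action text."""
    records = []
    for m in DOC_PATTERN.finditer(text):
        # Look only at the context right after the document number.
        window = text[m.end():m.end() + 80]
        col = re.search(r"col(?:umn)?\.?\s*(\d+)", window, re.IGNORECASE)
        lines = re.search(r"lines?\s*(\d+(?:-\d+)?)", window, re.IGNORECASE)
        records.append({
            "document": re.sub(r"[\s,]", "", m.group(0)),  # normalize "US 9,123,456"
            "column": col.group(1) if col else None,
            "lines": lines.group(1) if lines else None,
        })
    return records

sample = "Claim 1 is rejected over US 9,123,456 at column 4, lines 12-18."
print(extract_citations(sample))
# → [{'document': 'US9123456', 'column': '4', 'lines': '12-18'}]
```

A production system would replace the regexes with trained NLP models and handle far messier input, but the output shape (a normalized document identifier plus the cited location) is the "enrichment" the interview describes.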

We also deployed AI and machine learning (ML) in our patent classification efforts. Every innovation the USPTO receives is classified into one or more symbols drawn from several hundred thousand categories. Our current, manual classification service is comparatively slow and costly. Our new AI/ML algorithms, dubbed AutoClass, have been "trained" to classify patent and non-patent documents with classification symbols in hours, at a tenth of the cost and with similar quality. The service incorporates user feedback to verify and validate the accuracy of results, and it integrates seamlessly into our routing and search functions. This new, smarter routing system has already saved the agency and its customers time and millions of dollars.
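AutoClass's actual models are not public, but the shape of the task (scoring a document against a very large set of classification symbols and returning the best matches) can be illustrated with a toy nearest-centroid classifier. The class name, training examples, and symbol labels below are invented for illustration only.

```python
import math
from collections import Counter, defaultdict

def _vector(text: str) -> Counter:
    # Bag-of-words term counts; a real system would use learned embeddings.
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyAutoClassifier:
    """Toy stand-in for an AutoClass-style multi-label classifier:
    one centroid per classification symbol, built from labeled examples."""

    def __init__(self) -> None:
        self.centroids: dict[str, Counter] = defaultdict(Counter)

    def train(self, symbol: str, document: str) -> None:
        self.centroids[symbol].update(_vector(document))

    def classify(self, document: str, top_k: int = 1) -> list[str]:
        v = _vector(document)
        ranked = sorted(self.centroids,
                        key=lambda s: _cosine(v, self.centroids[s]),
                        reverse=True)
        return ranked[:top_k]

clf = ToyAutoClassifier()
clf.train("G06N", "neural network machine learning model training")
clf.train("A61K", "pharmaceutical compound drug formulation dosage")
print(clf.classify("a machine learning model for image recognition"))
# → ['G06N']
```

The user-feedback loop the interview mentions maps naturally onto this sketch: each confirmed or corrected label becomes another `train` call, refining the centroids over time.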

What are some challenges when it comes to AI and ML in the public sector?

Scott Beliveau: One of the challenges we face with AI and ML in the public sector as an administrative agency is striking the appropriate balance between explainability and transparency. Explaining the rationale for our decisions is key to ensuring faith and transparency in the IP system. Transparency in both training data and algorithms is critically important as any biases could result in unintended negative impacts to applicants. At the same time, requiring full transparency potentially opens the process to “gaming” by people seeking to manipulate the process. Full transparency also potentially limits the USPTO’s ability to use private sector ML services since many of those leverage proprietary trade secrets.

How do analytics, automation, and AI work together at the USPTO?

Scott Beliveau: Analytics, automation, and AI are all critical for our data program and lifecycle. Our patent examiners and trademark attorneys use data in every step of the process as they make legal determinations whether to grant a patent or to register a trademark. Teams at the USPTO conduct analytics on the data captured during each step of this process to identify opportunities for improvement. We take advantage of those opportunities to improve using automation, AI/ML, or non-IT activities. Finally, we use data to evaluate the results of these improvements; thereby completing the continuous learning cycle.

How are you navigating privacy, trust, and security concerns around the use of AI?

Scott Beliveau: Carefully. According to a 2016 study by the Department of Commerce, IP-related industries account for 30 percent of employment in the United States. Failing to keep innovation secure (until it can statutorily be shared) can have disastrous consequences for a small business or for our nation's global competitiveness. Security is a top concern and drives every step of our decisions in creating, launching, and using AI technologies.

What are you doing to develop an AI-ready workforce?

Scott Beliveau: As an agency with thousands of computer scientists and engineers looking at the latest and greatest technology every day, the USPTO has a great head start on developing an AI-ready workforce. However, AI is a fast-moving field, and we have found that the best way to encourage AI-readiness is to promote an organizational culture that is continually learning and that embraces internal innovation as much as it embraces the innovation seen in the applications we receive every day. Build, measure, learn, and repeat; if something does not work, learn from it and move on.

What AI technologies are you most looking forward to in the coming years?

Scott Beliveau: Collaborative intelligence. Machines are great at processing large amounts of data quickly, freeing people from mundane or repetitive tasks. I'm particularly interested in seeing how innovators take advantage of advances in collaborative intelligence, not simply automating existing processes but redesigning them around these technologies.

Scott Beliveau will be presenting at an upcoming AI in Government online event, where he will dig deeper into these topics.
