Artificial Intelligence in Drug Development: Patent Considerations
September 25, 2023, 08:15 AM
“The transformative potential of AI, encompassing machine learning and generative capabilities, holds great promise for revolutionizing drug discovery and development. This promising horizon is accompanied by intricate challenges.”
Artificial intelligence (AI) is a field of computer science that creates software or models that mimic human reasoning or inference. Machine learning is a subset of AI that uses algorithms trained on massive amounts of data, allowing the computer to learn with gradually improving accuracy without being explicitly programmed. The biopharmaceutical and healthcare fields produce massive amounts of data, including the properties and characteristics of drug compounds, biological, genomic, and clinical data, the efficacy of treatments, adverse events and risks, and electronic health records. The data may come from many sources, both public and proprietary. AI systems trained on such data can streamline and optimize the drug development process, including drug discovery, diagnosing diseases, identifying treatments and risks, designing clinical trials, and predicting safety and efficacy profiles, thereby increasing efficiency and reducing costs.
AI can analyze large datasets covering a massive number of chemical compounds and use algorithms to identify potential drug candidates for further testing. AI also has the potential to use algorithms and biological and chemical data to modify existing chemical compounds and “conceive” new chemical entities with predicted efficacy and safety characteristics, in lieu of traditional medicinal chemistry methods. AI can find new uses or therapies for old compounds or existing drugs, and it can analyze historical clinical trial data to predict the potential successes and risks of drug candidates. AI can also analyze health records, genetic data, and medical histories to identify eligible participants for clinical trials, improving recruitment and efficiency. Finally, AI, using algorithms and patient data, may be able to develop personalized therapies that provide customized treatment based on personal health history and genetic data. See generally Daniel Lee, AI Pharma: Artificial Intelligence in Drug Discovery and Development 28 (2023).
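To make the kind of computational screening described above concrete, the short Python sketch below trains a classifier on labeled compound data and then uses it to rank an untested compound library. It is a minimal, hypothetical illustration using scikit-learn: the dataset, descriptors, and numbers are invented for illustration and are not drawn from any system discussed in this article.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for a proprietary training set: each row is a compound
# described by a few molecular descriptors (e.g., molecular weight, logP,
# polar surface area), with a binary "active against the target" label.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Train a classifier on the measured compounds and check it on held-out data.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# Score an untested compound library and surface the top-ranked candidates
# for follow-up testing by medicinal chemists.
library = rng.normal(size=(1000, 3))
activity_probability = model.predict_proba(library)[:, 1]
top_candidates = np.argsort(activity_probability)[::-1][:10]
print("indices of top-ranked candidates:", top_candidates)

In practice, the descriptors would come from cheminformatics tooling and the labels from assay data, but the patent questions discussed below arise regardless of the particular model or toolkit.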
In short, AI is poised to revolutionize drug development. But what are the patent-related issues one must consider in using AI for developing drugs? For example, can AI-generated drug compounds be patented? Who owns AI-generated inventions? I will discuss some of these issues.
Abstract ideas, mental steps, and mathematical algorithms are not patentable. See Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 216 (2014). The concept of “mental steps” includes tasks conducted by or with the assistance of computers, which would include AI. See M.P.E.P. § 2106. If an AI-assisted claimed invention is drawn to an “abstract idea” or a “mental step,” then the claim must recite something more to “‘transform the nature of the claim’ into a ‘patent-eligible application.’” Alice, 573 U.S. at 217.
It may be difficult to spell out the inner workings of an AI model in enough detail to transform the claim into something having an inventive concept. An applicant, however, can still focus on the specifics of the data input and the generative output of an AI model. For example, one can theoretically draft a claim with enough specifics about molecule selection, target identification, treatment steps, or clinical trial design to transform the claimed subject matter into something that is non-conventional and “inventive.”
The Federal Circuit held in Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022), that “inventors must be natural persons; that is, human beings.” Id. at 1210. Since an AI system cannot be an inventor, any invention made solely by AI would be ineligible for patent protection. The Thaler decision leaves open the “question of whether inventions made by human beings with the assistance of AI are eligible for patent protection.” Id. at 1213. The answer to this question may depend on the amount and quality of the assistance provided by AI.
To be an inventor or a joint inventor, the law requires one to have contributed in some significant manner to the conception of the invention or to its reduction to practice. See Dana-Farber Cancer Institute v. Ono Pharm. Co., 964 F.3d 1365, 1371 (Fed. Cir. 2020). If an invention were jointly “invented” by AI and humans, would that invention be eligible for patent protection? If an AI system contributes in a significant manner to the conception of the invention or to its reduction to practice, the AI system would seem to qualify as a joint inventor even though humans also contributed to the conception. But under the rationale espoused in Thaler, if AI cannot be an inventor because it is not a human being, then AI also cannot be a joint inventor, since a joint inventor must likewise be an “individual,” i.e., a human being. See 35 U.S.C. § 100(g). Because AI cannot be identified as an inventor, an invention made jointly by humans and AI may also not be entitled to patent protection.
Since AI is just a machine, it could be argued that it has no “mind” and is incapable of forming any “conception.” After all, conception is “the formation in the mind of the inventor, of a definite and permanent idea of the complete and operative invention, as it is hereafter to be applied in practice.” Burroughs Wellcome Co. v. Barr Labs., Inc., 40 F.3d 1223, 1228 (Fed. Cir. 1994) (emphasis added). One should note, however, that the concept of “mental steps” in the patent-eligibility analysis includes tasks performed by or with the assistance of computers. Further, “[a]n idea is sufficiently definite and permanent when only ordinary skill would be necessary to reduce the invention to practice, without extensive research or experimentation.” Bd. of Educ. v. Am. Bioscience, 333 F.3d 1330, 1338 (Fed. Cir. 2003).
Under this logic, AI would just be a tool used by humans to conduct scientific research, just as a human would use a computer, a DNA sequencer, or other laboratory equipment. The resulting work would be attributed to the human using the tool. Therefore, if AI identifies a chemical compound as having potential activity, the conception of that compound would be attributed to the human who supervised or managed the AI to obtain the structure of the compound.
Moreover, arguably, there is no “permanent idea of a complete and operative invention” until the utility of the invention is confirmed by a human mind. One could argue that the human who confirms the utility of an invention generated by AI is the individual who conceives of the invention. See Bd. of Educ., 333 F.3d at 1340 (“[G]eneral knowledge regarding the anticipated biological properties of groups of complex chemical compounds is insufficient to confer inventorship status with respect to specifically claimed compounds.”).
At the end of the day, companies using AI for drug development must carefully document the conception and reduction to practice by humans of any potential inventions. A human must contribute to the conception or reduction to practice in a significant way, rather than merely supplying well-known concepts or the current state of the art, to be considered an inventor of an AI-assisted invention.
Can AI be considered a person of ordinary skill in the art (POSA)? AI systems are typically trained on massive amounts of data, which can include prior art patents and publications. Thus, AI may be the closest thing to a hypothetical person presumed to be aware of all the prior art. But, as a matter of policy, if AI cannot be an “inventor,” it seems that AI should also not be considered a POSA.
Even if AI itself cannot be considered a POSA, can a POSA be a hypothetical being who has access to AI, assuming AI has become a common tool used by researchers in the pertinent field? On the other hand, if AI models used by researchers are trained with datasets that contain non-public, proprietary information, it may be difficult to argue that such AI models should be considered accessible to a POSA.
Nevertheless, if AI could be considered a POSA or a hypothetical person having access to AI, would any invention ever be considered non-obvious? After all, AI is not only designed to simulate human intelligence capable of conducting human-like reasoning and inferences but is also trained with massive amounts of data that it can analyze and process to generate outputs at speeds far exceeding what a human mind can do.
In determining obviousness, however, the law requires that “[p]atentability shall not be negated by the manner in which the invention was made.” 35 U.S.C. § 103. Hence, it could be argued that the fact that scientists would use AI in conceiving and developing new inventions should not be held against them in determining whether the resulting inventions would have been obvious.
An applicant has the duty to disclose information material to patentability to the U.S. Patent and Trademark Office (USPTO). 37 C.F.R. § 1.56. Failing to carry out this duty may result in a finding of inequitable conduct, rendering the patent unenforceable. See Therasense, Inc. v. Becton Dickinson and Co., 649 F.3d 1276, 1288 (Fed. Cir. 2011) (en banc).
Under USPTO rules, information is material to patentability when “it establishes, by itself or in combination with other information, a prima facie case of unpatentability of a claim.” 37 C.F.R. § 1.56(b). Under Federal Circuit law, “the materiality required to establish inequitable conduct is but-for materiality,” that is, whether the USPTO would have allowed the claim had it been aware of the undisclosed information. Therasense, 649 F.3d at 1291. The court also recognizes an exception to the but-for test “in cases of affirmative egregious misconduct,” such as the “filing of an unmistakably false affidavit.” Id. at 1292.
Does an applicant have a duty to disclose to the USPTO that AI was used to develop the invention if AI was not identified as an inventor or a joint inventor? Given that AI cannot be an inventor, information about the level of AI’s involvement in the invention may be material to patentability. If the quality of the contribution made by AI rises to an inventive level, i.e., if AI contributed to the conception of the invention or significantly to its reduction to practice, the USPTO may deem such an AI-generated invention ineligible for patent protection. Filing an inventor declaration without identifying AI as an inventor or a co-inventor may also constitute egregious misconduct.
Out of an abundance of caution, even if the contribution made by AI does not rise to an inventive level, the applicant may consider disclosing the information about AI’s involvement to the USPTO and explaining that the AI contribution was not inventive.
Who owns AI-generated or AI-assisted inventions? If AI-generated inventions cannot be patented, the only way to retain proprietary rights may be through other means, such as trade secrets or confidential know-how. This is obviously not ideal, since to obtain Food and Drug Administration (FDA) approval of a drug product, FDA regulations require public disclosure of many aspects of pharmaceutical research, such as the structure of the chemical entity in the drug. Once public disclosure is made, trade secret protection is no longer available. Of course, trade secrets can still protect other AI-related assets, such as source code, machine learning models, and proprietary data.
In addition, machine learning requires training with massive datasets. The content of the datasets may come from a company’s internal sources and/or publicly available sources. The datasets may also come from third-party vendors or from sources of unknown origin. If the datasets used to train the AI model contain data that are copyrighted, licensed from third parties, or otherwise subject to confidentiality or use restrictions, questions may arise as to whether the owner or licensor of the datasets has any rights to the inventions generated by the AI model.
Data from external sources used to train an AI system may not be owned by the vendor supplying the data and may be associated with use or licensing restrictions. One can imagine that inventions generated from such an AI system may be subject to royalty obligations. And if multiple AI models are used, the resulting inventions may be subject to multiple royalty obligations.
Further, if the AI system itself is patented, the patent owner may be able to seek royalties that reach through its patent rights and attach to any discoveries made using the AI system. See Bayer AG v. Housey Pharm., Inc., 228 F. Supp. 2d 467, 470-71 (D. Del. 2002) (ruling that the defendant had not impermissibly conditioned a license of its research tool patent upon royalty provisions covering unpatented products and activities).
A biopharmaceutical company must carefully consider the contract terms with any AI company it engages with for drug development. Issues relating to inventorship and ownership of the data output and the intellectual property rights resulting from the use of AI must be addressed in any agreements with the AI company before a biopharmaceutical company embarks on using AI in drug development.
Further, the biopharmaceutical company should obtain indemnification from the AI company against any future claims of rights to any resulting invention or work. The company should also obtain appropriate representations and warranties from the AI company, including that it has the right to supply and/or license the data without violating any third-party rights.
Even though AI cannot be an inventor, AI-generated inventions can still infringe third-party patents. There may be AI-related patents on drug discovery or development that have issued, or applications pending before the USPTO, assuming they satisfy the patent eligibility requirements. As discussed, owners of such patents may demand royalties on inventions that result from using the patented AI system. A biopharmaceutical company should therefore conduct an appropriate freedom-to-operate analysis before using a particular AI model to conduct scientific research.
If confidential information or inventive features are input into an AI system, will the input be considered a public disclosure or otherwise become prior art? Is the AI system “learning” from the information entered into the system, making it available to others using the system? These are thorny issues that courts may soon face.
The transformative potential of AI, encompassing machine learning and generative capabilities, holds great promise for revolutionizing drug discovery and development. This promising horizon is accompanied by intricate challenges. The implications of patent eligibility, inventorship, ownership, duty of disclosure, and freedom to operate in AI-augmented drug development present multifaceted inquiries that demand careful consideration. Striking a balance between harnessing AI’s capabilities and adhering to patent laws necessitates proactive strategies. Addressing the nuances of inventorship between human innovation and AI contributions, clarifying the boundaries of patent-eligible subject matter, and charting ownership rights and licensing considerations are pivotal endeavors. Additionally, the evolving concept of a POSA in an AI-integrated landscape and the implications of public disclosures within AI systems call for ongoing legal interpretation. In a landscape marked by collaboration between AI and human ingenuity, a harmonious blend of technological prowess and legal acumen will be pivotal in driving pharmaceutical innovation while safeguarding the rights and responsibilities that underlie the drug development ecosystem. As AI continues its ascent in reshaping drug development, the convergence of innovation and legal foresight will play an indispensable role in shaping the trajectory of this transformative journey.
Warning & Disclaimer: The pages, articles and comments on IPWatchdog.com do not constitute legal advice, nor do they create any attorney-client relationship. The articles published express the personal opinion and views of the author as of the time of publication and should not be attributed to the author’s employer, clients or the sponsors of IPWatchdog.com. Read more.
2 comments so far.
fyi, the generative aspect ALSO destroys the notion of “just using a tool.”
I have provided the thought experiment of a person in a second room opening a black box within which a prior invention has been placed.
That man KNOWS that he did not invent the invention in the black box.
He also may NOT declare that he can be deemed the inventor (because he opens the box and recognizes the invention of another).
Whether or not the actual inventor of the invention that has been placed in the black box is known MUST be viewed as immaterial to the man in the second room.
I have also further modified the thought experiment with that opening of the black box in the second room being live streamed to millions (as opposed to a single person sitting in that second room). It is quite clear that those viewing the live stream have no legal right to deem themselves to be inventors.
You very much have a Hobson’s choice (and your writing seeks to deny either choice, and in that, falters quite a bit).
As “AI” has become THE buzzword, and several aspects are NOT new (and do not rise to the dilemma that we actually have), a clear focus on the “generative” aspect of Generative AI is required.
With that aspect, you have EITHER:
a) created the instance in which no human may rightfully – and correct in a legal sense – claim inventorship of an end item.
OR
b) really only have ‘moved things around’ that were already there — the Generative AI ‘really’ then becomes the Person Having Ordinary Skill In The Art (I prefer PHOSITA*** to POSA).
You have no other options, and the Hobsonality here is that the end item lacks the critical Human in the Loop+++.
Damned if you do – damned if you don’t.
*** It behooves the situation to remember that PHOSITA itself is NOT a real human being, but rather a legal fiction that is fully cognizant of the State of the Art. NO real human can be a PHOSITA.
+++ That one MAY have co-inventors that are human does NOT resolve the issue, and any claims that contain ANY elements not traceable to an actual human inventor are thus tainted — from both or fatally one of a non-human inventor standpoint or a legal obviousness standpoint.
And the attempt to say ‘trained with non-public or proprietary data’ does NOT save you, given the instant the non-human result (the generative result) IS published, that reasoning vanishes. You cannot use it (claim it as human invented) and you cannot NOT use it (it IS an aspect of State of the Art).