
What is artificial intelligence? Legislators are still looking for a definition

by Madyson Fitzgerald, Colorado Newsline
October 6, 2023

ChatGPT (Colorado Newsline illustration)
This story originally appeared at Stateline.
Back in March, Hawaii state Sen. Chris Lee introduced legislation urging the U.S. Congress to consider the benefits and risks of artificial intelligence technologies.
But he didn’t write it. Artificial intelligence did.
Lee instructed ChatGPT, an AI-powered system trained to follow instructions and carry out conversations, to write a piece of legislation that highlights the potential benefits and drawbacks of AI. Within moments, it produced a resolution. Lee copied and pasted the entire text without changing a word.
The resolution was adopted in April with bipartisan support.
“It was making a statement that using AI to write legislation — a whole law — was perhaps the single biggest thing we could do to demonstrate what the good and the bad of AI could be,” Lee, a Democrat, said in an interview with Stateline.
ChatGPT, which has received reams of national coverage this year, is only one example of artificial intelligence. AI can refer to machine learning, in which companies use algorithms that mimic the way humans learn and carry out tasks. AI also can refer to automated decision-making. More broadly, the words “artificial intelligence” can conjure images of robots.
While organizations and experts have tried to define artificial intelligence, there is no consensus on a single definition. That leaves individual states grappling with how to understand the technology so they can put rules in place.
“There’s no silver-bullet solution that anybody has, to figure out what to do next,” Lee said.
The lack of a uniform definition is challenging legislators trying to craft regulations for the growing technology, according to a report from the National Conference of State Legislatures. The report comes from the NCSL Task Force on Artificial Intelligence, Cybersecurity and Privacy, composed of legislators from about half the states.
Many states already have passed laws to study or regulate artificial intelligence. In 2023, lawmakers in at least 24 states and the District of Columbia introduced bills related to AI, and at least 14 states adopted resolutions or enacted legislation, according to an analysis from the national legislative group.
Some, such as Texas and North Dakota, established groups to study artificial intelligence. Others, among them Arizona and Connecticut, tackled the use of artificial intelligence systems within state government entities. In Colorado, state Sen. Robert Rodriguez told Newsline in June he was in the early stages of developing possible AI regulations that could be introduced in the next session of the Colorado General Assembly.
Connecticut’s new law, which will require the state to regularly assess its systems that contain AI, defines artificial intelligence in part as “an artificial system” that performs tasks “without significant human oversight or can learn from experience and improve such performance when exposed to data sets.”
But every state that defines AI in its legislation does so differently. For instance, Louisiana in a resolution this year said that artificial intelligence “combines computer science and robust datasets to enable problem-solving measures directly to consumers.”
“I think the definition is just so gray because it’s such a broad and expanding area that people do not generally understand,” Lee said.
AI is a tricky subject, but Rhode Island state Rep. Jennifer Stewart, a Democrat who sits on the state’s House Innovation, Internet and Technology Committee, said the uncertainty shouldn’t stop legislators from moving forward.
“I’m of the opinion that we can regulate and harness what we’ve created,” she said. “And we shouldn’t be nervous or scared about wading into these waters.”
The National Artificial Intelligence Initiative Act of 2020, enacted Jan. 1, 2021, sought to define AI, describing it as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”
President Joe Biden’s Blueprint for an AI Bill of Rights, a set of guiding principles developed by the White House for the use of automated systems, extends the definition to “automated systems that have the potential to meaningfully impact the American public’s rights, opportunities or access to critical resources or services.”
The European Union, Google, a trade group known as BSA | The Software Alliance and many more entities have spelled out similar but differing definitions for artificial intelligence. But AI experts and legislators are still searching for a conclusive definition — and weighing whether a concrete definition is even necessary for pursuing a regulatory framework.
At the most basic level, artificial intelligence refers to machine-based systems that produce an outcome based on the information fed into them, said Sylvester Johnson, associate vice provost for public interest technology at Virginia Tech.
But AI programs differ depending on how their systems have been trained to use data, and that, Johnson said, is what legislators need to understand.
“AI is very fast moving,” he said. “If you really want the people who make policy and legislative assemblies at the federal level or state levels to be richly informed, then you need an ecosystem that is designed to provide some kind of concise and precise way of updating people about trends and changes that are happening in the technology.”
Deciding how broad the definition of AI should be is a significant challenge, said Jake Morabito, the director of the Communications and Technology Task Force at the American Legislative Exchange Council. ALEC, a conservative public policy organization, supports free market solutions and the enforcement of existing regulations that could cover various uses of AI.
A “light touch” approach to regulating AI would help the United States become a technology leader on the global stage, Morabito said, but given the fervor over ChatGPT and other systems, legislators at all levels should be studying the technology’s development to better understand it.
“I just think this technology’s out of the bag, and we can’t put it back in the bottle,” Morabito said. “We need to fully understand it. And I think lawmakers can do a lot to get up to speed on understanding how we can maximize the benefits, mitigate the risks and make sure that this technology is developed on our shores and not abroad.”
Some experts think legislators don’t need a definition to govern artificial intelligence. When it comes to an application of artificial intelligence — a specific area where AI is being used — a definition isn’t entirely required, argued Alex Engler, a fellow in governance studies at the Brookings Institution.
Instead, he said, a core set of rules should apply to any program that uses automated systems, no matter the purpose.
“You can basically say, ‘I don’t care what algorithm you’re using, you have to meet these criteria,’” Engler said. “Now, that isn’t to say there’s literally no definition, it just means that you’re not counting some algorithms in and others out.”
Focusing on the specific systems, such as generative AI that’s capable of creating text or images, may be the wrong approach, he said.
The core question, Engler said, is this: “How do we update our civil society and our consumer protections so that people still have them in an algorithmic era?”
Legislation some states have passed over the last few years has attempted to answer that question. While Kentucky isn’t at the forefront — the state’s legislature only recently created new committees focused on technology — state Sen. Whitney Westerfield, a Republican and a member of the NCSL’s AI task force, said the nationwide “avalanche of bills” is driven by fear.
AI technology is not new, but now that the topic is in the spotlight, the public — and legislators — are beginning to respond, he noted.
“When they’ve [legislators] gotten a legislative hammer in their hand, everything’s a nail,” Westerfield said. “And if there’s a story that pops up about this, that or the other, it doesn’t even have to affect their constituents, I think that just adds more fuel to the fire.”
The potential harms that come with using artificial intelligence are creating momentum for more regulation. For example, some AI tools can produce tangible harm by replicating human biases, yielding decisions or actions that favor certain groups over others, said Megan Price, executive director of the Human Rights Data Analysis Group.
The nonprofit group applies data science to analyze human rights violations worldwide. Price has designed several methods for statistical analysis of human rights data, which have aided her work estimating the number of conflict-related deaths in Syria. The organization also uses artificial intelligence in some of its own systems, she said.
The potential implications of artificial intelligence and its power have created an appropriate sense of urgency among legislators, Price said. And weighing the potential harms and uses, like her team does, is crucial.
“And so, the question really is when a mistake is made, what is the cost and who pays it?” she asked.
A new focus on social justice in technology is also worth noting, Virginia Tech’s Johnson said. “Public interest technology” is a growing movement among social justice groups focused on how artificial intelligence can be put to work for the public good.
“I think if there’s a reason to be hopeful about actually advancing our ability to regulate technology in a way that improves people’s lives, and their outcomes, this [public interest technology] is the way to go,” Johnson said.
Here is Hawaii state Sen. Chris Lee’s resolution:
Stateline is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501(c)(3) public charity. Stateline maintains editorial independence. Contact Editor Scott S. Greenberger for questions: info@stateline.org. Follow Stateline on Facebook and Twitter.
Colorado Newsline is part of States Newsroom, a network of news bureaus supported by grants and a coalition of donors as a 501(c)(3) public charity. Colorado Newsline maintains editorial independence. Contact Editor Quentin Young for questions: info@coloradonewsline.com. Follow Colorado Newsline on Facebook and Twitter.
Our stories may be republished online or in print under Creative Commons license CC BY-NC-ND 4.0. We ask that you edit only for style or to shorten, provide proper attribution and link to our web site. Please see our republishing guidelines for use of photos and graphics.
Madyson Fitzgerald is the newsletter producer and breaking news reporter for Stateline.
