Google and Microsoft shouldn’t decide how technology is regulated


steven36


A researcher who studies AI principles warns that giving too much credence to Big Tech is like “asking the fox for guidance on henhouse security procedures.”

 


 

When it comes to AI, Big Tech wants a hand in developing regulation. In a January 20 piece for the Financial Times calling for the regulation of the technology, Google CEO Sundar Pichai argued that his company’s artificial intelligence principles could be used as a template for future laws. Brad Smith of Microsoft said the same in a talk at the World Economic Forum earlier this week.

 

Google and Microsoft are right that it’s time for government to step in and provide safeguards, and that regulation should build on the important thinking that’s already been done. However, looking only to the perspectives of large tech companies, who’ve already established themselves as dominant players, is asking the fox for guidance on henhouse security procedures. We need to take a broader view.

 

Rapid advances in machine learning technology, which falls under the general umbrella of AI, have lent urgency to questions about how to build and use AI systems responsibly, safely, and ethically. Criminal justice algorithms are racially biased, autonomous vehicles have been involved in fatal crashes, and algorithmic content moderation has contributed to a wave of disinformation efforts. The task of ensuring AI actually supports human rights and well-being has at times felt overwhelming, the questions unanswerable.

 

That hasn’t stopped a lot of people from trying to answer them. Alongside Google’s and Microsoft’s, there have been principles for ethical AI from national governments and intergovernmental organizations, advocacy organizations, expert groups, and more.

 

Over the past year, I worked with a team of researchers to analyze AI principles from around the world, trying to see what they might have in common. We coded each principle in the 36 documents we ended up focusing on and uncovered eight key themes:

 

  • Fairness and nondiscrimination: AI systems shouldn’t reinforce social inequality—instead, they should promote inclusivity.
  • Accountability: Developers should plan for their technology’s impacts. Monitoring and auditing mechanisms need to be in place, and impacted individuals and populations should have access to adequate remedies.
  • Privacy: AI should respect privacy, both in sourcing the data that is used for development and in giving people agency over when and how their personal info is used to make decisions about them.
  • Transparency and explainability: We should know where AI systems are being used and how they reach the decisions they do.
  • Safety and security: AI systems should be tested to ensure they perform as intended and resist interference from unauthorized parties.
  • Professional responsibility: The people involved in the development and deployment of AI systems have an obligation to prioritize integrity, collaboration, professionalism, and foresight.
  • Human control of technology: To promote trust and respect autonomy, there should be human checks on AI, from review of important decisions to fail-safe mechanisms that kick in for extenuating circumstances.
  • Promotion of human values: We should be guided by our core values and the well-being of all humanity when we design and deploy AI systems.

 

The coherence of these various principles documents—from different regions and interest groups—suggests that social norms for AI are emerging.

 

Law and regulation originate in social norms, which makes Microsoft and Google correct to posit that these near-universal themes among AI principles are a good starting point for regulation.

 

However, as we note in our paper, there’s a wide and thorny gap between being able to articulate goals for AI such as fairness, transparency, and safety, and writing rules that would govern the thousands of decisions, big and small, that result in any given technology being built and used responsibly.

 

One way to register that gap is to recognize the very divergent visions different organizations advance within these themes. For example, every document we looked at included some version of a fairness or nondiscrimination principle. But they call for different implementations. Some focus, for example, on forbidding the use of biased datasets—even though arguments that truly unbiased data don’t exist are pretty persuasive. Others call for greater diversity on development teams to ensure that a broader range of perspectives is baked into technologies from the start.

 

Still others want to see AI used to uncover and remedy existing instances of discrimination. Regulators would need to parse these options carefully and decide which were appropriate.

 

In all, if advances in AI technology have landed us in unfamiliar territory, an analysis of AI principles might be the map we need to make sense of it all. But that’s only true if we look outside U.S.-based tech companies’ visions for the regulations that would best serve them. Principles from a broader range of stakeholders provide visibility into everything from the greatest risks that AI poses to vulnerable and marginalized populations to the key human values, such as self-determination, equality, and sustainability, that we should be seeking to protect.

 

AI principles are a map that should be on the table as regulators around the world draw up their next steps. However, even a perfect map doesn’t make the journey for you. At some point—and soon—policymakers need to set out the real-world implementations that will ensure that the power of AI technology reinforces the best, and not the worst, in humanity.

 

Source

 



12 minutes ago, steven36 said:

“asking the fox for guidance on henhouse security procedures.”

Indeed. Even common sense warns us against doing this. However regulation is legislated, it must be done according to the citizenry's wishes, which is the essential promise of a democratic form of government.



1 hour ago, aum said:

it must be done according to citizenry's wishes, which is the essential promise of a democratic form of government.

That's not the way it works in the real world. If citizens don't protest (they sit back and say nothing) and all parties in government can agree, then regulation gets passed; that's how it goes even in a democratic form of government. In countries without a genuinely elected government, what the citizens want doesn't mean a thing: laws get passed, the citizens must follow them, and they don't have any say in the matter. But what the people want and what the government wants have nothing to do with what the private big tech companies want. They will always shape regulation to suit only themselves, for their own profit and to make themselves bulletproof against legislation. So they should never be able to dictate how government regulation is made!

 

If it's a matter of national security, citizens nowhere really have any say in the matter; they only think they do. Even if laws get loosened, the next time there is a war the government uses its national security powers to make new laws that overpower the old ones, and citizens don't have any say in that either. One step forward and two steps back is how it works.

 

Here are some examples. Do you think the people or big tech wanted to repeal net neutrality? No, the government did, because they were paid off by Big Telco. If they start letting Big Tech make the rules, it means they were most likely bribed. Do you think everyone and big tech wanted EU copyright reform or the GDPR? No, it cost them billions of dollars and caused some people problems. They don't always get what they want, and they should not have the power to regulate AI either; that would be dangerous.

 

Big Tech has tried to write its own fate and history ever since it existed, but time and time again the government writes it for them.

 

When it comes to people and big tech, people are naive and just sheep who follow trends regardless of the problems they cause. Even if they complain, if there are no laws on the matter then big tech is not held liable, and if big tech makes the laws then they are still not liable and can turn AI into another monopoly. Google and Microsoft don't really like each other; they just use each other to profit.


