What algorithm auditing startups need to succeed



In the interest of transparency and to prevent potential harm, algorithms that impact human lives should ideally be reviewed by an independent body before deployment, just as environmental impact reports must be approved before a construction project can begin. While no such legal requirement for AI exists in the United States, a number of startups have been created to fill the void in algorithm auditing and risk assessment.

A third party trusted by the public and potential customers could increase trust in AI systems in general. As AI startups in aviation and autonomous driving have argued, regulation could enable innovation and help businesses, governments, and individuals adopt AI safely.

In recent years, lawmakers have proposed a number of bills that support external algorithm audits, and last year dozens of influential members of the AI community from academia, industry, and civil society recommended external algorithm audits as one way to put AI principles into practice.

Like the consulting firms that help businesses scale AI deployments, offer data monitoring services, and sort unstructured data, algorithm auditing startups occupy a niche in the growing AI industry. But recent events surrounding HireVue illustrate how these companies differ from other AI startups.

HireVue's software is currently used by more than 700 companies, including Delta, Hilton, and Unilever, for pre-hire assessments of job applicants based on a résumé, a video interview, or their performance in psychometric games.

Two weeks ago, HireVue announced that it would no longer use facial analysis to determine whether a person is fit for a job. You may ask: how could analyzing a person's facial features be considered a scientifically verifiable way to conclude they are qualified for a job? HireVue never really proved it could, but the claim raised plenty of questions.

A HireVue executive said in 2019 that 10% to 30% of competency scores could be tied to facial analysis, a claim reporting at the time called “deeply disturbing.” Before the Utah-based company decided to drop facial analysis, AI ethics researcher Suresh Venkatasubramanian resigned from a HireVue advisory board, and the Electronic Privacy Information Center filed a complaint with the Federal Trade Commission (FTC) alleging that HireVue engaged in unfair and deceptive business practices in violation of the FTC Act. The complaint specifically cites studies showing that facial recognition systems can identify emotions differently depending on a person's race. It also points to facial recognition systems' documented history of misidentifying women with dark skin, people who do not conform to a binary gender identity, and Asian Americans.

Facial analysis may not identify individuals the way facial recognition technology does, but as the Partnership on AI puts it, facial analysis can classify characteristics with “more complex cultural, social, and political implications,” such as age, race, or gender.

Despite these concerns, in a press release announcing the results of its audit, HireVue said: “The audit concluded that ‘the [HireVue] evaluations are working as advertised with respect to issues of fairness and bias.’” The audit was performed by O'Neil Risk Consulting & Algorithmic Auditing (ORCAA), founded by data scientist Cathy O'Neil. O'Neil is also the author of Weapons of Math Destruction, a book that takes a critical look at the impact of algorithms on society.

The audit report contains no analysis of the AI system's data or training code, but rather conversations about the types of harm HireVue's AI could cause when performing pre-hire assessments of early-career job candidates on eight competency measures.

The ORCAA audit posed questions to company teams and external stakeholders, including people asked to take a test using HireVue software and businesses that pay for HireVue's services.

After signing a legal agreement, you can read the eight-page audit document for yourself. It indicates that by the time ORCAA carried out the audit, HireVue had already decided to begin phasing out facial analysis.

The audit also conveys a concern among stakeholders that facial analysis makes people generally uncomfortable. One participant in a stakeholder interview worried that HireVue's facial analysis might work differently for people wearing head coverings or face masks and disproportionately flag their applications for human review. Last fall, VentureBeat reported that people with dark skin taking the state bar exam with remote proctoring software expressed similar concerns.

Brookings Institution fellow Alex Engler's work focuses on AI governance. In an op-ed for Fast Company this week, Engler wrote that he believes HireVue misinterpreted the audit results to engage in a form of ethics washing and described the company as more interested in “favorable press than legitimate introspection.” He also called algorithm auditing startups a “booming but struggling industry” and urged government oversight or regulation to keep audits fair.

HireVue CEO Kevin Parker told VentureBeat the company began phasing out its use of facial analysis about a year ago. He said HireVue came to the decision following negative media coverage and an internal assessment that concluded “the benefit of including it was not sufficient to justify the concern it provoked.”

Alex Engler is right: Algorithmic auditing companies like mine risk becoming corrupt.

We need more leverage to get it right, with open methodology and results.

Where could we get such leverage? Lawsuits, enforcement or both. https://t.co/2zkgFs4YEo

- Cathy O'Neil (@mathbabedotorg) January 26, 2021

Parker disputes Engler's claim that HireVue misinterpreted the audit results and said he is proud of the outcome. But one thing Engler, HireVue, and ORCAA agree on is the need for industry-wide change.

“Having a standard that says ‘This is what we mean when we talk about an algorithmic audit, this is what it covers, and this is its intent’ would be very helpful, and we look forward to participating and seeing those standards emerge. Whether it's regulatory or industry-driven, I think it will all help,” Parker said.

So what kind of government regulation, industry standards, or internal company policy do algorithm auditing startups need to succeed? And how can they maintain their independence and avoid being co-opted, as some AI ethics research and diversity-in-tech initiatives have been in recent years?

To find out, VentureBeat spoke with representatives of bnh.ai, Parity, and ORCAA, startups offering algorithm audits to business and government clients.

Require companies to perform algorithm audits

One solution endorsed by the people working in each of the three companies was to enact regulations requiring algorithm audits, particularly for algorithms that inform decisions that have a significant impact on people's lives.

“I think the final answer is federal regulation, and we've seen it in the banking industry,” said Patrick Hall, chief scientist at bnh.ai and visiting professor at George Washington University. The Federal Reserve's SR 11-7 guidance on model risk management already requires audits of statistical and machine learning models, which Hall sees as a step in the right direction. The National Institute of Standards and Technology (NIST) tests facial recognition systems trained by private companies, but participation is voluntary.

ORCAA chief strategist Jacob Appel said an algorithm audit is currently defined as whatever the chosen algorithm auditor offers. He suggests companies be required to disclose algorithm audit reports the same way publicly traded companies are required to share financial statements. It is laudable for companies to undertake rigorous audits when there is no legal obligation to do so, but Appel said the voluntary nature of the practice reflects a lack of oversight in the current regulatory environment.

“If there are complaints or criticisms about how HireVue's audit results were publicized, I think it's helpful to see the lack of legal standards and regulatory requirements as contributing to those outcomes,” he said. “These early examples can help highlight or underscore the need for an environment with legal and regulatory requirements that give auditors a little more momentum.”

There are growing signs that external algorithm audits may become the norm. Lawmakers in parts of the United States have proposed legislation that would effectively create markets for algorithm auditing startups. In New York, lawmakers have proposed mandating annual audits of hiring software that uses AI. California voters last fall rejected Proposition 25, which would have required counties to replace cash bail systems with algorithmic assessments, and the state's Senate Bill 36 requires that pretrial risk assessment algorithms be reviewed by an independent third party. In 2019, federal lawmakers introduced the Algorithmic Accountability Act, which would require companies to examine and correct algorithms that result in discriminatory or unfair treatment.

However, any regulatory requirement will need to consider how to measure the fairness and influence of third-party AI, as few AI systems are built entirely in-house.

Rumman Chowdhury is CEO of Parity, a company she founded a few months ago after stepping down as global lead for responsible AI at Accenture. She believes such regulation should take into account that use cases can vary widely from industry to industry. She also believes legislation should address the intellectual property claims of AI startups unwilling to share data or training code, a concern these companies often raise in court proceedings.

“I think the challenge here is balancing transparency with the very real and tangible need for companies to protect their intellectual property and what they are building,” she said. “It's unfair to say companies should be forced to share all of their data and models, because they do have intellectual property they're building, and you could be auditing a startup.”

Maintain independence and increase public confidence

To prevent the algorithm auditing startup space from being co-opted, Chowdhury said it will be essential to establish common professional standards through groups like the IEEE or through government regulation. Any such regulation or standard could also include a government mandate that auditors receive some form of training or certification, she said.

Appel suggested that another way to build public trust and broaden the community of stakeholders affected by the technology would be to require a public comment period for algorithms. Such periods are typically used for legislative or policy proposals, or for civic efforts like proposed construction projects.

Other governments have begun taking steps to increase public trust in algorithms. The cities of Amsterdam and Helsinki created algorithm registries in late 2020 that tell local residents which person and which city department are responsible for deploying a particular algorithm, and give residents a way to provide feedback.
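As a rough illustration of the kind of accountability information such a registry might hold, here is a hypothetical sketch in Python. The field names and values are assumptions for illustration only, not the actual schema Amsterdam or Helsinki use.

```python
# Hypothetical algorithm registry entry, loosely modeled on the kind of
# accountability information the Amsterdam and Helsinki registries expose.
# Field names and values are illustrative assumptions, not the cities' schema.
algorithm_registry_entry = {
    "name": "parking-permit-triage",                      # deployed algorithm
    "responsible_person": "Jane Doe",                      # named contact
    "city_department": "Mobility Services",                # department in charge
    "purpose": "Prioritize permit applications for manual review",
    "feedback_channel": "https://example.city/feedback",   # placeholder URL
}

print(algorithm_registry_entry["responsible_person"])
print(algorithm_registry_entry["city_department"])
```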

Define audits and algorithms

A language model with billions of parameters is very different from a simpler algorithmic decision-making system built without a qualitative model. Definitions of what counts as an algorithm may be needed to help determine what an audit should contain, as well as to help companies understand what an audit should accomplish.

"I think regulations and standards need to be very clear about what is expected of an audit, what it has to accomplish so that companies can say, 'This is what an audit cannot. do and that's what he can do ”. It helps manage expectations, I think, ”Chowdhury said.

A culture change for humans working with machines

Last month, a group of AI researchers called for a culture change in the computer vision and NLP communities. A paper they published examines what that culture shift would mean for data scientists within companies. The researchers' suggestions include improving data documentation practices and establishing audit trails through documentation, procedures, and processes.

Chowdhury also suggested that people in the AI industry look for lessons in the structural issues other industries have already faced.

Examples of this include the recently launched AI Incident Database, which borrows an approach used in aviation and computer security. Created by the Partnership on AI, the database is a collaborative effort to document cases in which AI systems fail. Others have suggested that the AI industry incentivize finding bias in AI systems the way the security industry does with bug bounties.

“I think it's really interesting to look at things like bug bounties and incident reporting databases, because they allow companies to be very public about the vulnerabilities in their systems in a way that says we're all working to fix them, instead of pointing fingers because something went wrong,” she said. “I think the way to be successful with an audit is that it can't happen after the fact; it should happen before something is released.”

Don't think of an audit as a panacea

As ORCAA's audit of a HireVue use case shows, disclosure of an audit may be limited and does not necessarily guarantee that AI systems are free from bias.

Chowdhury said a disconnect she frequently encounters with clients is the expectation that an audit will consist only of code or data analysis. She said audits can also focus on specific use cases, such as gathering input from marginalized communities, managing risk, or critically examining corporate culture.
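To make the narrow “code or data analysis” view concrete, here is a minimal, hypothetical sketch of one quantitative check such an audit might include: an adverse impact ratio computed across groups. The data, column names, and the four-fifths threshold are illustrative assumptions, not ORCAA's or any vendor's actual methodology.

```python
# Minimal sketch of a quantitative fairness check an audit might include:
# comparing selection rates across demographic groups (adverse impact ratio).
# Data, column names, and the 0.8 threshold are illustrative assumptions.
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Selection rate of each group divided by the highest group's selection rate."""
    selection_rates = df.groupby(group_col)[outcome_col].mean()
    return selection_rates / selection_rates.max()

# Hypothetical screening outcomes (1 = advanced to interview, 0 = rejected).
outcomes = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "advanced": [1,   1,   1,   0,   1,   0,   0,   0],
})

ratios = adverse_impact_ratio(outcomes, "group", "advanced")
print(ratios)
# A common (but contested) rule of thumb flags ratios below 0.8 for review.
print(ratios[ratios < 0.8])
```

As the rest of this section argues, a check like this covers only one slice of what an audit can examine.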

“I think there is an idealistic idea of what an audit will accomplish. An audit is just a report. It will not solve everything and will not even identify all the problems,” she said.

Andrew Burt, chief executive of bnh.ai, said clients tend to view audits as a panacea rather than as part of an ongoing process of monitoring how algorithms perform in practice.

“Spot audits are useful, but only up to a point, because of the way AI is implemented in practice. The underlying data changes, the models themselves can change, and the same models are frequently used for secondary purposes, all of which require periodic review,” Burt said.
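As a rough sketch of the periodic review Burt describes, the snippet below compares a live metric against a value recorded at audit time and flags drift. The metric, tolerance, and numbers are illustrative assumptions, not bnh.ai's actual monitoring method.

```python
# Minimal sketch of post-audit monitoring: flag when a live metric drifts
# too far from the value recorded during the audit. The metric, tolerance,
# and numbers below are illustrative assumptions, not any vendor's method.
from dataclasses import dataclass

@dataclass
class MonitoringResult:
    metric: str
    baseline: float
    current: float
    drifted: bool

def check_drift(metric: str, baseline: float, current: float,
                tolerance: float = 0.10) -> MonitoringResult:
    """Flag the metric if it moves more than `tolerance` from the audited baseline."""
    return MonitoringResult(metric, baseline, current,
                            drifted=abs(current - baseline) > tolerance)

# Selection rate measured at audit time vs. a later production snapshot (hypothetical).
result = check_drift("selection_rate", baseline=0.42, current=0.29)
if result.drifted:
    print(f"{result.metric} drifted from {result.baseline:.2f} to "
          f"{result.current:.2f}; schedule a re-audit.")
```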

Consider the risk beyond what is legal

Audits that only ensure compliance with government regulation may not be enough to detect potentially costly risks. An audit can keep a company out of court, but that is not the same as keeping up with evolving ethical standards or managing the risk that unethical or irresponsible actions pose to the bottom line.

“I think there should be an aspect of algorithmic auditing that is not only about compliance but also about ethical and responsible use, which is also an aspect of risk management, where reputational risk is a consideration. You can absolutely do something legal that everyone thinks is terrible,” Chowdhury said. “There is an aspect of algorithmic auditing that should cover the impact on society, in terms of the impact on your company's reputation, and that actually has nothing to do with the law. In fact, it goes beyond the law.”

Final thoughts

In the current environment for algorithm auditing startups, Chowdhury said she worries that companies savvy enough to understand the political implications of inaction will try to co-opt the audit process and control the narrative. She also fears that startups under pressure to grow revenue will sign off on less-than-rigorous audits.

"As much as I would like to believe that everyone is a good actor, not everyone is a good actor, and there is certainly some harm in essentially offering an ethical wash to companies under the guise of auditing. algorithmic, ”she said. “Because it's a bit of a Wild West territory when it comes to what it means to do an audit, it's anyone's game. And unfortunately, when it's anyone's game and the other actor isn't incentivized to play at the highest level, we're going to go down to the lowest denominator, that's my fear.

Senior Biden administration officials at the FTC, the Justice Department, and the White House Office of Science and Technology Policy have all signaled plans to increase regulation of AI, and a Democratic Congress could take up a range of tech policy issues. Internal audit frameworks and risk assessments are also options. The OECD and Data & Society are currently developing risk assessment classification tools companies can use to determine whether an algorithm should be considered high or low risk.

But algorithm auditing startups differ from other AI startups in that they must seek approval from an independent arbiter and, to some extent, from the general public. To be successful, the people behind algorithm auditing startups, like the ones I spoke with, increasingly point to stricter industry-wide regulation and standards.
