Privacy advocates urged members of Congress to impose a moratorium on facial recognition technology until the technology matures and thorough civic debate determines the best uses of the tech.

Democratic and Republican representatives alike on the House Oversight Committee slammed public and private use of the tech, calling facial recognition technology an invasion of privacy and a potential threat to First and Fourth Amendment rights.

“I think we need a time out,” said Rep. Jim Jordan (R-Ohio) at a Wednesday hearing on the topic. “No one in an elected position made a decision on this. These 18 states [using this technology] — that’s more than half the population of the country. That is scary. It was just eight years ago the IRS targeted people for their political beliefs. It doesn’t matter what side of the political spectrum you’re on, this should concern us all.”

While some representatives focused more on government use of facial recognition tech — like police use — others drilled down on private use and what they called “surveillance capitalism.”

When progressive firebrand Rep. Alexandria Ocasio-Cortez (D-N.Y.) asked, “Do you think it’s fair to say Americans are being spied on and surveilled on a massive scale without their consent or knowledge?” the witnesses replied yes.

Clare Garvie, senior associate at Georgetown University’s Center on Privacy & Technology, said there is a distinction between how Facebook uses facial recognition tech and how Amazon uses it.

According to the American Civil Liberties Union (ACLU), Amazon aggressively markets its facial recognition software, Rekognition, to law enforcement agencies in Orlando, Florida and Washington County, Oregon. Emails obtained by the ACLU show law enforcement agencies in Arizona and California are also interested in using the tech.

Beyond the intrusion on privacy, the witnesses’ primary concern was facial recognition tech misidentifying American citizens. As witness Joy Buolamwini, founder of the Algorithmic Justice League, pointed out, a black man sued Apple in April for misidentifying him as a thief.

This is because facial recognition tech still hasn’t matured, she said, and the data used to train facial recognition algorithms is imperfect and incomplete, resulting in racial biases.

For many facial recognition algorithms, she said, white men are used as the template, which means the error rate for identifying people of color is very high. For black women, the error rate is 37.4 percent.

“When we’re hearing from communities, it’s with a great deal of anxiety about facial recognition,” she said. “It’s been done without community input, without clear roles, and without the right standards to protect the First Amendment and other core values. We should study the harms before we roll something out.”

Law enforcement agencies may be the biggest offenders when misidentifying suspects or criminals. The criminal justice system’s data is notoriously faulty and racially biased, and advocates like the ACLU have condemned other types of algorithms using criminal justice data.

Even if Big Tech companies like Apple, Amazon, Facebook, Google and others continue to perfect their software, they’re still going to produce bad results if the data is bad.

“We must caution against anyone who says these algorithms are getting better and therefore the results are getting better,” Garvie said. “It doesn’t matter how good an algorithm gets, if law enforcement agencies put unreliable or wrong data in, we will get wrong results out. These algorithms are not magic.”

The Partnership on Artificial Intelligence (PAI) — whose members include Amazon, Facebook and Google — released a report in April condemning use of algorithmic tools in the criminal justice system, but Amazon and IBM continue to market their software to local and federal law enforcement agencies, including the FBI.

A few of the witnesses took a more nuanced position on a facial recognition moratorium.

Andrew Ferguson, professor of law at the University of the District of Columbia, said facial surveillance should be banned, but that other uses of the tech should be regulated.

“My opinion is if we’re not going to regulate, we should push the pause button on this tech because it’s dangerous,” he said.

Cedric Alexander, former president of the National Organization of Black Law Enforcement Executives, said law enforcement agencies could use facial recognition tech responsibly, but that right now too many agencies are rushing to adopt the tech without stopping to consider safeguards or potential abuses.

“What I’m more concerned about is the failed use and misuse of the tech, and how we differentiate when it’s being used correctly and when it’s not,” he said. “The police end up being the end user of this technology. If I’m not properly trained or supervised, there’s no transparency about how and when this tech is being utilized, then I, the police department, ends up being the bad guy. And that’s one thing the police don’t need in this environment when we’re trying to build relationships between police and community. I need to make sure there’s ethics, morals, standards, good practices we know and feel good about.”

Representatives from both parties called for bipartisan legislation on the issue. Garvie noted that much of the work on facial recognition tech relies at least in part on federal grant money, so Congress could influence how companies use and market the tech.

“By and large there are no rules around this,” she said.

Follow Kate on Twitter