As federal lawmakers continue to look for ways to regulate Big Tech, the industry continues to deny the gravity of widely publicized problems like privacy erosion and technology addiction.

At a hearing hosted by the U.S. Senate Committee on Commerce, Science and Transportation, Sen. Brian Schatz (D-Hawaii) grilled Google over its alleged use of “persuasive technology.”

Persuasive technology is “the idea that computers, mobile phones, websites, and other technologies could be designed to influence people’s behavior and even attitudes,” according to Nanette Byrnes at MIT Technology Review, who wrote in 2015 that “many companies are using technologies that measure customer behavior to design products that are not just persuasive but specifically aimed at forging new habits.”

At the hearing, Sen. John Thune (R-S.D.) said he planned to use the testimony to inform legislation “to require internet platforms to give consumers the option to engage with the platform without having the experience shaped by algorithms.”

“As online content continues to grow, online companies rely increasingly on artificial intelligence (AI)-powered automation to display content and optimize engagement. Unfortunately, the use of AI algorithms can have an unintended and possibly even dangerous downside,” Thune said, referencing recent news reports from Bloomberg and The New York Times revealing how YouTube “chased” engagement and clicks at the expense of user safety, for example by recommending videos of children playing in a pool to users who had been watching sexually explicit videos.

Senators drilled down on how tech giants that rely on algorithms to curate content negatively impact users and American civic engagement as a whole. As several witnesses explained, because algorithms drive so many of these companies’ decisions to take down or promote content, Big Tech abdicates responsibility for negative results, like political polarization or censorship.

“Silicon Valley has a premise: that society would be better, more efficient, smarter, more frictionless, if we eliminate steps of human judgment,” Schatz said. “But computers recommended these awful videos in the first place. Companies are letting these algorithms run wild and are leaving humans to clean up the mess. Algorithms are amoral. They eliminate human judgment as part of their business models. We need them to be more transparent, and companies need to be more accountable about the outcomes they produce.”

Tristan Harris, co-founder and executive director of the Center for Humane Technology, told senators this phenomenon is “not happening by accident, it’s happening by design.”

“In the race for attention, companies have to get more of it by becoming more and more effective,” he said. “Companies compete on whose algorithms more accurately predict what will keep users there the longest. Because YouTube wants to maximize watch time, it tilts the entire ant colony of humanity towards crazytown.”

For example, he said, YouTube recommended anorexia videos to teenage girls who watched “diet” videos on the platform.

Maggie Stanphill, director of user experience at Google, used her testimony to describe Google’s commitment to the well-being of its users and repeated the company’s claim that its core values are privacy, transparency, and user control.

“We believe technology should play a helpful, useful role in all people’s lives, and we’re committed to helping everyone strike a balance that feels right for them,” she said. “This is why last year, as a result of extensive research and investigation, we introduced our Digital Wellbeing Initiative: a set of principles that resulted in tools and features to help people find their own sense of balance. Many experts recommend self-awareness and reflection as an essential step in creating a balance with technology.”

One way the initiative helps Google users is through “Do Not Disturb” and “Wind Down” functions on Android phones (Apple provides similar features on iPhones), which help to limit blue-light emissions (which can disrupt sleep and damage skin) and curb phone use so that users can disconnect in a healthy way.

On YouTube, Stanphill said, Google provides opt-in “Take a Break” reminders for those who have been watching videos for extended periods of time.

At a Hoover Institution event in May, tech experts and industry members discussed how Google favors its own in-house products, directs users toward the Google experience and tries to keep them there as long as possible, since longer engagement gives the company more user data to enhance its products and share with advertisers.

Schatz pointed out at the hearing that lawmakers want more transparency because they have only limited anecdotal and circumstantial evidence about how tech giants use their algorithms.

When Schatz asked Stanphill if Google uses persuasive technology to keep users using Google products as much as possible, Stanphill said, “We do not use persuasive technology.”

Schatz then asked, “Mr. Harris, is that true?”

Harris said it’s complicated.

“Dark patterns are not core to the whole family of companies, including YouTube,” Stanphill said. “We build our products with privacy, transparency and control for the users, and we build a lifelong relationship with the user, which is primary. That’s our trust.”

To which Schatz replied, “I don’t understand what any of that meant.”

According to Harris, the problem isn’t so much the algorithms themselves as it is the lack of accountability and transparency behind them, which may require regulation.

“The founder of Netscape [Marc Andreessen] said software is going to eat the world,” Harris said. “What he meant by that was, software can do everything more efficiently. So we’re going to let it eat up our elections, our health, our transportation, our media…and the problem was, it was eating the world without taking responsibility for it.”
