Recent research indicates that data brokers are packaging Americans’ mental health data for sale, capitalising on the pandemic-era rise of telemedicine and therapy applications that collect details about users’ mental health. No legislation currently stops them.

One firm advertised the names and home addresses of people suffering from depression, anxiety, PTSD, or bipolar disorder. Another offered a database of aggregated mental health records covering thousands of people, priced at $275 per 1,000 “ailment interactions.”

For years, data brokers have been operating in a contentious area of the internet economy, gathering and reselling Americans’ personal information for government or commercial purposes, such as targeted advertising.

However, the emergence of telemedicine and therapy applications in the post-pandemic period has created an even more problematic product line: Americans’ mental health data. And selling it without the person’s knowledge or consent is totally lawful in the United States.

In a paper published Monday, a research team at Duke University’s Sanford School of Public Policy illustrates how broad the market for people’s health data has become.

After contacting data brokers to inquire about the types of mental health information available, researcher Joanne Kim reported that she eventually found 11 companies willing to sell bundles of data that included information on what antidepressants people were taking, whether they struggled with insomnia or attention issues, and details on other medical ailments such as Alzheimer’s disease or bladder-control issues.

Some of the data was sold in an aggregate form that would have allowed a customer to know, for instance, a rough estimate of how many individuals in a specific Zip code might be depressed.

But other brokers offered individually identifiable data, including names, home addresses and incomes, with one data-broker sales representative pointing to lists such as “Anxiety Sufferers” and “Consumers With Clinical Depression in the United States.” Some brokers even provided a sample spreadsheet.

It was like “a tasting menu for buying people’s health data,” said Justin Sherman, a senior scholar at Duke who oversaw the study team. “Health data is some of the most sensitive data out there, and most of us have no idea how much of it is out there for sale, often for only a couple hundred dollars.”

HIPAA, or the Health Insurance Portability and Accountability Act, governs how hospitals, physicians’ offices, and other “covered entities” handle Americans’ health information.

But the legislation doesn’t safeguard the same information once it is transferred anywhere else, allowing app developers and other firms to lawfully share or sell the data any way they’d like.

According to Kim, several of the data brokers provided formal client-complaint mechanisms and opt-out forms. However, because the companies frequently did not disclose where their data came from, she said, many consumers were likely unaware that the brokers had gathered their information in the first place. It was also unclear whether the apps or websites had given their users the option not to share the data; many companies reserve the right in their privacy policies to share data with marketers or other third-party “partners.”

For years, privacy groups have cautioned about the unregulated data trade, warning that the information may be misused by advertisers or exploited for predatory purposes. Health insurance providers have used data brokers to investigate people’s medical expenditures, and federal law enforcement agents have used them to track down undocumented immigrants.

Sherman said mental health data should be handled with extreme caution, because it may belong to people in vulnerable situations, and sharing or misrepresenting it could have disastrous consequences.

Pam Dixon, the founder and executive director of the World Privacy Forum, a research and advocacy organisation, said at a Senate hearing in 2013 that an Illinois pharmaceutical marketing business sold a list of supposed “rape victims,” with 1,000 names beginning at $79. Shortly after her testimony, the business deleted the list.

Now, a decade later, she is concerned that the health-data issue has worsened, in part due to the increasing sophistication with which companies can collect and share people’s personal information — including not just in defined lists, but also through regularly updated search tools and machine-learning analyses.

“It’s a heinous behaviour that they continue to engage in. Our health data is part of someone’s business model,” Dixon added. “They’re inferring, scoring, and categorising trends in your life, your activities, where you go, what you eat — and what are we supposed to do, not live?”

The number of venues where individuals share personal health data has grown with the rise of online pharmacies, therapy apps, and telehealth services that Americans use to seek and receive medical care from the comfort of their own homes. Jen Caltrider, a researcher with the tech company Mozilla whose team reviewed more than two dozen mental health applications last year, found that “the great majority” were “exceptionally disturbing.”

Federal officials have recently expressed an interest in more aggressively evaluating how corporations handle people’s health information. The Federal Trade Commission announced this month that it had reached an agreement with the online prescription-drug service GoodRx to pay a $1.5 million civil penalty after the company was charged with compiling lists of users who had purchased certain medications, including those for heart disease and blood pressure, and then using that information to better target its Facebook ads.

“Digital health firms and mobile applications should not profit from customers’ extraordinarily sensitive and individually identifiable health information,” an FTC spokesman said. GoodRx said in a statement that the settlement concerned an “old issue” connected to a common software tool, known as a tracking pixel, that allowed the firm to “advertise in a fashion that we feel was compliant with regulations.”

Following the Supreme Court’s decision last summer to overturn Roe v. Wade and open the way to new state abortion prohibitions, some data brokers stopped providing location data that might be used to monitor who visited abortion clinics.

Several senators, including Elizabeth Warren (D-Mass.), Ron Wyden (D-Ore.), and Bernie Sanders (I-Vt.), have endorsed legislation that would enhance state and federal power against health data exploitation and limit the amount of reproductive-health data internet businesses may collect and share.

However, the data-broker sector is unregulated at the federal level, and the United States lacks a comprehensive federal privacy law that would establish guidelines for how applications and websites treat people’s personal information in general.

Two states, California and Vermont, require companies to register in a data-broker registry. California’s registry lists more than 400 businesses, some of which claim to specialise in health or medical data.

Dixon, who was not involved in the Duke study, expressed optimism that the findings and the Supreme Court decision would serve as a wake-up call about the real-world hazards this data can create.

“There are literally millions of women for whom the repercussions of information bartered, traded, or sold concerning parts of their health might have criminal ramifications,” she explained. “It is not hypothetical. It’s here in front of you right now.”