*Title:* Anthropic disabled my Max account after charging me $106.60 — I’m a cancer patient and my medical documentation is trapped inside

*Post:*

On January 16th, 2025, Anthropic charged me $106.60 for my Max subscription and disabled my account in the same moment. I believe I was flagged because I’m a long-term guest at a Marriott hotel using shared WiFi.

I’m not a bot. I’m not running scripts. I’m not violating terms of service. I’m a 41-year-old woman with MDS (myelodysplastic syndrome) that has converted to leukemia. I’m facing a bone marrow transplant I may not survive.

For months, Claude has been the only thing that actually helped me navigate a medical system that failed me for over a decade. It helped me organize 11 years of medical records, track my labs, draft insurance appeals, and write letters to doctors who wouldn’t listen. That work is what finally got someone to take me seriously.

My last prompt before the ban was asking Claude to help me interpret my October bloodwork. That’s it. That’s the “violation.”

That chat history is my medical documentation. I need it to continue advocating for my care when I’m too sick to remember what happened, what was said, what was missed. Without it, I lose years of work at the worst possible moment.

I have done everything I was supposed to do:

- Submitted appeals through the official form
- Emailed support@anthropic.com
- Emailed usersafety@anthropic.com
- Filed a complaint with the California Attorney General
- Filed a complaint with the FTC
- DM’d and tweeted at Daniela Amodei, Dario Amodei, Amanda Askell, Jan Leike, and Mike Krieger
- Posted in the official Claude Discord #community-help channel

Every response has been automated. The support bot offered me a human agent. I accepted. No human ever came.

It can’t be a content violation — my last prompt was about bloodwork. It can’t be a payment issue — the charge went through. The only explanation is their system flagged my IP because I’m on shared hotel WiFi, and no human has reviewed it since.

I am asking for one of two things:

1. Restore my account
2. Export my complete chat history and send it to me

Anthropic talks constantly about building AI that helps people. Claude helped me. It helped me fight for my life. Now I can’t get a single human being at the company to look at my case.

If anyone here works at Anthropic or knows someone who does, I would be grateful for any help. Thank you.

  • MattGaiser a day ago |
    I wish companies had a pay $50 to speak to a human option if need be.

    No spamming abuser will pay that, but it should easily cover the cost of an overseas support agent to handle edge cases.

    • LtWorf a day ago |
      You know they'd put you in touch with someone who has no clue about anything.
    • p4coder a day ago |
      I think what we really need is an Internet Users' Bill of Rights. The power and information asymmetry is too great for users to obtain fair treatment.
    • nottorp a day ago |
      So they will hire a "support agent" that costs $30 and leave their service as buggy as bearable so they make those extra $20 more often.
    • asmor a day ago |
      The problem with rate limiting via money is that any price will be too much for some people to pay (and too little to deter others).

      I'm so glad ICANN banned silent auctions for the next round of gTLDs.

    • rschiavone a day ago |
      That would cause companies to provide the shittiest possible service in order to monetize their support. And they would still outsource the support for $1/hr.
    • BrenBarn a day ago |
      I wish governments had a "pay $100 million if you don't have a human option for all your customers" option.
  • nobodywillobsrv a day ago |
    Anthropic should do a full inquiry into this and fire people for it. No questions asked. Fire them. And then reply with automated bits.
  • euazOn a day ago |
    I don't know. If this is a real thing, then I'm really sorry for what happened to you.

    But this whole post seems a bit fishy to me. Brand new account, and it starts with "Title:" and "Post:", the whole thing being obviously entirely AI generated, and a few other signs.

    • tom_m 9 hours ago |
      I believe it, or can at least believe something like it could happen. They should fix it, if it's true. And quickly. I'm not sure I trust these companies. Anthropic least of all. Doomer Rick Moranis there just comes off too untrustworthy.
  • IshKebab a day ago |
    This is one really great thing about the GDPR - you could just file a personal information access request and they legally have to give you all your data.

    (Sorry that doesn't help you.)

    • thunfischbrot 21 hours ago |
      They do have an export feature, which I encourage everyone using Claude to use occasionally. This is unfortunately the reality: most of us rely on digital platforms and services that can be taken away, from vacuum robots to digital thermostats to email accounts and LLM conversation histories.

      Migrate to the services you trust most, where it makes sense for you. And occasionally export your data from all of them, more and less trusted alike.

  • edarchis a day ago |
    I got banned for asking about the yfinance python module. They had an "appeal" but it was a Google Form that probably nobody ever looks at.

    My recommendation would be to get in touch with their DPO (Data Protection Officer) and invoke the GDPR rule that you have the right to 1. have an explanation as to why an automated decision was made about you, 2. ask for a human review. You are out of the GDPR scope but the legal contact might not bother checking and just restore your account. Getting your data is also a right under GDPR but getting your account back would be a better option. I wouldn't mention this or they'll jump on it.

  • user_7832 a day ago |
    Sorry to hear. (Assuming you don't get help from here) The "best" solution very likely involves Anthropic's legal team.

    Depending on how much time, energy, and money you have, you can either draft a (simple) email to their customer support CC'ing legal (with ChatGPT/Gemini thinking's help), or ideally get a lawyer to do the same (easier if one of your friends is one).

    If you don't get a response to the email, send a certified letter, ideally with a lawyer's help.

    I'm not sure on what legal grounds you could sue (I'm sure a good lawyer would find a few; not providing your data to a California resident seems an obvious one to me), but getting legal involved is often enough to "wake up" large companies into getting a human in the loop. A certified letter requires acknowledgement from the recipient, again mandating a human.

    Best of luck.

  • smca a day ago |
    (I work at Anthropic) I’ll exhaust every option to have this fixed for you.

    If you can pass along at least some way to get in touch (link your Twitter account or similar) that would be helpful.

    • marichala a day ago |
      Thank you so much. I’m emilycovets on Twitter or roomservicestudio on instagram. I rarely go on twitter but Instagram or Twitter DM is likely the best way to contact me

      Again thanks so much. I’m spiraling physically due to my condition and also this has been an emotional blow that I did not need.

  • jwrallie a day ago |
    If this is real, best of luck to get your data back.

    So many companies are operating with basically no support nowadays, so for the end user you just hope everything works perfectly (until it doesn't).

    I don't even blame the bots. I've had human interactions in the past where, while the agents could perfectly understand my problems, they couldn't do anything because they worked from a script and had limited agency. It is basically the same flow: you just talk until it either escalates to someone who does have the ability to help, or it loops back to the beginning and you try again.

  • OutOfHere 21 hours ago |
    Anthropic's website has been user hostile for years now. They promised a free trial but never delivered it, putting blocking obstructions in the way. They keep logging users out. They locked you out. No one in their right mind should be using it for anything that is not via API, and certainly never for any vital research. For medical topics, I consult both ChatGPT and Gemini, and both work very well. No one at Anthropic has cared about systematically resolving these user-hostile behaviors.

    What's surprising is that you have spent an unbelievable amount of your precious time fighting this, and that doesn't make any sense. Recovering that data isn't going to save you, so spend your time on what will. Surely you have a copy of your data that you can use elsewhere.

  • oidar 20 hours ago |
    Something similar happened to a physician friend of mine who was working on a note-writing and research app that didn't contain PMI; it used simulated/scrubbed medical data. Thankfully, he only had to wait two weeks to hear from a real person at Anthropic. Getting to a real human support person should be possible, but for some reason companies optimize for as little human interaction as possible. Thankfully, this person knew to post here, and people from Anthropic work here.

    Marichala - Please post back here with the outcome.