
News from Around Supportlandia

Batten down the hatches

The whole piece is worth reading, as Columbus identifies six common blind spots in how companies use and manage CX platforms (blind spots that, alarmingly, aren't even that technically complicated), but this particular bit stood out to me for being painfully accurate (at least in my experience):

These six failures share a root cause: SaaS security posture management has matured for Salesforce, ServiceNow, and other enterprise platforms. CX platforms never got the same treatment. Nobody monitors user activity, permissions or configurations inside an experience management platform, and policy enforcement on AI workflows processing that data does not exist. When bot-driven input or anomalous data exports hit the CX application layer, nothing detects them.

Because CX is often treated as the expensive black sheep of business functions, orgs can be pretty loosey-goosey with their tools, and CX agents often don’t get effective security and data privacy training.

And AI companies themselves can be lax about security, even in their advertising; just last week on LinkedIn, I talked about how dangerous it is to encourage employees to adopt an AI tool without consulting their company’s IT team just because it’s free.

This is why I say that, if you’re in CX, you're also in cybersecurity and trust and safety. We are the first and last defenders of our customers’ data and privacy, and we can’t assume that other teams will do this work for us. There are enough attack vectors in the evolving tech landscape as it is — we shouldn’t be one of them.

Cry havoc, Claude

Anthropic confirmed to Time on Tuesday that the company has decided to “radically overhaul [its Responsible Scaling Policy],” including its “promise to not release AI models if Anthropic can’t guarantee proper risk mitigations in advance.”

At the risk of repeating myself (I’ve written several times about what a stupid idea this is): it is fucking bonkers that any AI company would forge head-first into innovation without stopping to determine whether that technology could cause serious harm to humans, but it is especially fucking bonkers for a company at the forefront of AI research and technology to do so, as Anthropic very much is.

And it is really wild for them to make this decision for what are essentially revenue reasons, as chief science officer Jared Kaplan admitted in an interview with Time:

“We felt that it wouldn't actually help anyone for us to stop training AI models[.] We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.”

Anthropic tries to argue in its new RSP that this isn’t a matter of money, but instead one of safety:

“If one AI developer paused development to implement safety measures while others moved forward training and deploying AI systems without strong mitigations, that could result in a world that is less safe,” the new version of the RSP, approved unanimously by Amodei and Anthropic’s board, states in its introduction. “The developers with the weakest protections would set the pace, and responsible developers would lose their ability to do safety research.”

This argument is rationalizing, self-serving, and frankly, bullshit. It might work if we, Anthropic’s customers, didn’t have any say in what AI models and tools we used, but we do. I don’t know about you, but if I need to pick a bus to ride, I’m going to pick the one that isn’t going to drive me right off a cliff simply because that’s what all the other buses are doing.

But it’s also bullshit for the simple reason that Anthropic clearly knows it’s bullshit:

Asked whether Anthropic was caving to market pressure, Kaplan argued that, in fact, Anthropic was making a renewed commitment to developing AI safely. “If all of our competitors are transparently doing the right thing when it comes to catastrophic risk, we are committed to doing as well or better,” he said. “But we don't think it makes sense for us to stop engaging with AI research, AI safety, and most likely lose relevance as an innovator who understands the frontier of the technology, in a scenario where others are going ahead and we're not actually contributing any additional risk to the ecosystem.”

They don’t want to “lose relevance as innovators” (read: lose market share) by being more responsible and safe than their competitors, and somehow think this won’t introduce any more risk. But how can they be certain of that if they’re training Claude first and only thinking about mitigation measures afterward?

(That “if all of our competitors are transparently doing the right thing” is also galling and doing a lot of heavy lifting — will Anthropic now only be as transparent as their competitors? Because if so, that’s a pretty low bar.)

Look, I like Claude. It’s helped me do a lot of stuff that I otherwise might not have been able to do. But as someone with practical and ethical concerns about AI technology, I picked Claude because of Anthropic’s considered approach to AI development, not despite it.

Anthropic was a leader in the AI space not just because of how powerful Claude can be, but also because it was a much-needed voice of reason. Giving up that voice in favor of making more money means also giving up that place as leader, and will only hurt Anthropic — and us — in the long run.

That’s some great free speech you have there, be a shame if something happened to it

Netflix announced in a statement yesterday that it’s pulling out of the running to buy Warner Brothers, meaning that close Trump ally Larry Ellison or his son David is set to own it, HBO, CNN, CBS, Paramount, Discovery, and a decent chunk of TikTok.

There’s no doubt this is part of a plan to gain control of a large share of American media ahead of the midterms, because what better way to influence the outcome than to own what’s said and reported about it? We can only hope that the Justice Department will block the merger, but, uh, let’s just say I’m not holding my breath.

Incidentally, if you’re a writer of any kind, it’s more important than ever to own your audience, and — if you can — your platform.1 Don’t rely on social media like TikTok, X, or LinkedIn to distribute and house your content. Doing so was ill-advised even before Trump, and it’s a disaster waiting to happen now.

And Now for Some Good News

A light week for career news, but I think everyone is just as swamped as I am. 😅

Help me make this newsletter self-sustaining! Right now, I'm mostly funding this endeavor, but you can help support this newsletter by upgrading to a paid subscription, making a one-time donation, or sharing it in your network.

You can also help by clicking below to check out today’s sponsor, Mindstream. Just seeing what it has to offer will help them and me!

Unlock ChatGPT’s Full Power at Work

ChatGPT is transforming productivity, but most teams miss its true potential. Subscribe to Mindstream for free and access 5 expert-built resources packed with prompts, workflows, and practical strategies for 2025.

Whether you're crafting content, managing projects, or automating work, this kit helps you save time and get better results every week.

Read, Watch, and Listen

Read

Erica Beyea wrote a great piece for KnowledgeOwl explaining important frameworks in knowledge management (like KCS and SKMS).

Wired’s Steven Levy wrote that Wall Street is suffering from AI psychosis. (I haven’t even defined it, but I already know you’re nodding your head).

If you need something to spark joy, Craig Stoss made a delightful site where you (or, you know, your toddler) can keysmash.

That my favorite Mercer got her white sauce is a win for us all.

Watch

Decorated Olympic athlete Eileen Gu takes us into her head. (I’m normally not a sports person, but I love how she articulates things and strive to match her smarts.)

Listen

Upcoming Events

How to Start a Career in Community Management
February 28 at 11:00am ET. Online event hosted by CMX by Bevy. RSVP here.

CX Virtual Summit
March 2 at 10am ET. Online event hosted by Instant Teams. Register here.

Beyond the Basics: AI in GTM
March 3 at 5:30pm PT, Intercom office, San Francisco, CA. Hosted by Customer Success Meetup. Register here.

What High-Performing CX Teams Automate (and What They Never Will)
March 4 at 3pm ET. Hosted by Front, feat. Kevin Yang (Front), Alyssa Medina (Fathom). Register here.

Frameworks, Not Guesswork: Making Better Decisions with AI
March 6 at 1pm ET. Online event hosted by ElevateCX, feat Alex Hong (Syncly), Chrissy Sebald (Boldr), and Erica Clayton (Forethought). Register here.

From Contribution to Influence - Using Your Strengths to Shape What Comes Next
March 10 at AutogenAI, London, UK. Hosted by Women of Customer Success. Register here.

Customer Success Summit New York
March 10-11 at Convene, NYC, NY. Hosted by Customer Success Collective. Get tickets here.

Chief Customer Officer Summit New York
March 11 at Convene, NYC, NY. Hosted by Customer Success Collective. Request invite here.

The AI Trust Gap in Support
March 12 at 2pm ET. Online event hosted by Hiver, feat. Karen Lam (Top Hat), Christian Sokolowski (Rebuy Engine), Sarah Caminiti (SupportNinja), Luke Via (Hiver). Register here.

Community Led Growth MicroConf
March 13 in NYC, NY. Hosted by Tightknit. Get tickets here.

A CCO’s view: How Ironclad elevates CX through B2B support
March 17 at 12pm ET. Online event hosted by Customer Success Collective. Register here.

Customer Success Conversation & Connection
March 18 at 5:00pm GMT, Market Halls Paddington, London, UK. Hosted by Angela Scott. Get on waitlist here.

Success Amplified: At The Top
March 25 at 1:30pm EDT in NYC, NY. Executive forum hosted by Women of Customer Success. Keynote by Cassie Young. Get tickets here.

1 I don’t own my platform (I use beehiiv), but that’s only because this isn’t my main gig. If I needed to, I could switch very easily to a self-hosted Ghost site. If you can’t own your platform, the next best thing is a quick-to-execute backup plan.

2 I picked the title for my coverage of Anthropic’s new RSP (Cry havoc, Claude) before I listened to the podcast, so I’m keeping it. But I’m not surprised that we both found that quote eminently appropriate.

That's it for this week! If you have items for the Roundup you'd like to submit, you can do so at [email protected], but be sure to check out the Roundup FAQs first.

All of Support Human's content is free forever for individuals. You can power this content with a coffee, by subscribing, and by sharing to your networks! Any support is welcome and hugely appreciated!
