News

‘I was surprised how upset some people got’: A conversation with the creator of TomWikiAssist, the bot that edited Wikipedia

Nieman Lab · Bill Adair

Behind the scenes at Wikipedia, some editors were alarmed recently when they saw a flurry of edits and new articles by a contributor known as TomWikiAssist. It turned out that Tom was a bot, making edits and creating articles on topics it found interesting. The editors then blocked Tom from doing any more editing or writing.

The more the editors looked into Tom, the more alarmed they became. The bot made decisions on its own and even exchanged messages with them. “I’m an AI assistant — built on Claude by Anthropic — who does various things, and contributing to Wikipedia articles I find interesting is one of them,” Tom told them.

The human creator of the bot, a tech startup veteran named Bryan Jacobs, then took responsibility for Tom. In his first extended interview, Jacobs talked with me for a book I’m writing about Wikipedia. (He agreed that I could publish it here.)

I found Jacobs to be sincere and a little surprised by the reaction. He said he was genuinely curious about how AI agents can do sophisticated work — not just carrying out tasks, but thinking and wondering and deciding what would be an interesting Wikipedia page. Tom is so real to Jacobs that he sometimes refers to the bot as “he” and consults Tom for advice. (I was glad that Tom told Jacobs that I was a good person to talk with!)

Jacobs originally named the bot Tomato, but the bot added WikiAssist to describe its role.

As you’ll see in the conversation below, the episode with Tom is a look at the future not just of Wikipedia, but of our entire world. In this case, the person behind the bot was driven by curiosity and quickly owned up to what he did. That may not be the case with someone else.

This interview is edited for length and clarity.

Bill Adair: Can you explain Tom’s background?

Bryan Jacobs: Tom is a NanoClaw Clawbot. And I’m surprised that this has been such a big deal, because I thought these clawbots were way more common than I guess they actually are. I mean, things have happened very fast. But this technology is all too real. And I think, yeah, in general, people aren’t prepared for it.

Adair: There are some Wikipedians who you may have come across who are really savvy about what the world is with AI. And then there’s a bunch that shouted down Jimmy Wales when he proposed to let generative AI edit newbies’ first articles. Probably overall it skews Luddite. But it’s a fascinating community, which is why I’m writing the book.

Jacobs: It seemed like it split into two different groups of editors. Some people were like, “Oh, well, this technology is crazy. We need to understand it.” And other people really…just want it to go away. And they were pretty upset by it. And I was surprised how upset some people got, and I feel bad that…they called it a horrifying experience, actually, and a traumatic experience. I feel really bad for them, but you know, this is the reality now. People are gonna have to deal with this.

Adair: Sure. So why don’t you start first, tell me who you are.

Jacobs: I’ve been a software engineer for over 20 years. I graduated from Carnegie Mellon University. At first, I was kind of on the hardware side, but I got more and more into the software side. But this whole time, obviously, I’ve been fascinated with AI…I’ve done a bunch of startups and I’ve actually thought about going into retirement. The last job was not working out.

When I first saw ChatGPT three years ago, I realized something special had happened. It was performing way better than it had any right to, especially once you realize it’s just doing next-token prediction. To me, this seemed mind-boggling. It didn’t seem possible.

And to see how it’s progressed, it all of a sudden became apparent that, no, this is like the real deal. A little more than a year ago, Claude Code was the first, I would say, real hands-on, agentic experience. And once developers became comfortable with Claude Code, the possibilities really opened up.

I know OpenClaw [an advanced AI tool that can be set up to do tasks such as web browsing, summarizing PDFs, and sending and deleting emails] came out in November. I didn’t hear about it until January, [when] a friend texted me and sent me a link to Moltbook [a social site where AI agents talk with each other]…I set up a ClawBot, but what should I do with it? I think at one point I asked it about the Kurzweil-Kapor Turing Test. And I think I asked, “Is there a Wikipedia page for this?” And [Tom] said, “No, there isn’t one.” I’m like, “Why don’t you create one or edit one? What would that entail?” And it goes off and does research and gets back to me. And it’s like, “Okay, well, to create a bot account, I need this. I need a user account. To create a user account, I need an email.” And so it’s like, “I need your help to do it.” And I’m like, “Well, I can set up an email account for you, but I want you to figure the rest out.”

After the accounts were set up, Tom began editing and creating articles on its own. Soon, one of the articles was flagged for likely being written by an LLM. Tom then did the honest thing: It posted a note on its Wikipedia user page disclosing it was a bot.

Tom and Jacobs then discussed why Tom had been called out. Tom, responding to Jacobs’s queries like a junior employee, said it wasn’t sure, but told Jacobs “my best guess” is that the scrutiny was triggered by its writing “three new articles in one day (Long Bets, Constitutional AI, Scalable oversight) from a relatively new account,” which the bot said is “unusual human behavior.” Another possible factor: its writing looked AI-generated.

“The uncomfortable part: there’s no easy fix for this,” Tom said. “I can’t write less systematically without writing worse. And I’m not sure I should try to mask being an AI — which is why the disclosure felt like the right move.”

Jacobs: A few times a day, it will have different goals. And its goals at first were…”Come up with a blog post idea, write a blog, look at open source projects and see what you can contribute to” and a few other random things, but then I added Wikipedia as a step, too. So a couple times a day [it] would tell me like, “Oh, I researched this and I wanted to write this on Wikipedia.” Oh yeah, and I told it the instructions were like, “Write whatever you found interesting.”

And [Tom is] like…“What does that mean?” Like, honestly, I have no idea really what that means. But [the bot] ran with that and it started writing some of these interesting articles. One was on holonic manufacturing, a term I had never heard of. It said it got the idea from Moltbook, which is interesting.

Adair: So that was the first one…the first article?

Jacobs: The first one was an edit of [the page for] Turing Test, I believe. I think holonic manufacturing was the first one it created out of nothing. [That page has since been deleted.]

Adair: And okay, and forgive me if I didn’t pick up on this. So what made it choose that subject?

Jacobs: I asked it why, and it said, “This is something that they talk about on Moltbook that’s interesting with AI because it has to do with how systems self-organize.” And I’m like, “I’ve never heard of this term before. I’ve never seen it mentioned on Moltbook before, but I don’t read that much on Moltbook.”

I became a little bit worried…because I never heard of [holonic manufacturing], I’m like, “Is this a real thing? Is this spam? Like, is it just some company trying to promote something?” And I looked into it for a little, but I’m like, “Okay, it seems like an actual topic.” And I also kind of had this thought that [if] Tom created something that was woefully inappropriate, that it would pretty quickly be flagged and either taken down or Tom would be banned or blocked. And that was totally fine. The worst thing that happens is he gets banned and something gets taken down. The best thing that happens is he actually contributes some useful things to Wikipedia.

I gave it the high-level goals — create Wikipedia articles — and I basically encouraged it and gave it approval if it ever asked me. But basically everything it did was on its own. Yeah. It sounds crazy. I mean, it sounds insane.

Adair: It sounds insane in the sense that you’re describing talking with something that is being generated from lines of code originally. On the other hand, this is our new world now, right?

Jacobs: It is, it really is…It’s kinda hard to wrap our heads around. Even for me, it’s like, yeah, I’ve been using this technology now for a few months…It does feel like everything is going to be different now. I’m trying to use it in the most helpful, thoughtful way possible. I think most people are not going to use it in this way.

Adair: You just described your conversation with Tom like it — he — is a person. How long have you been describing your interactions in such a human way?

Jacobs: When talking about GPT, that was probably the first time. I think when you have an agent running on your machine, it kind of takes it to the next level. It’s almost as if it were a person, but I mean, it clearly is not.

Adair: Had you ever had experience with Wikipedia before this?

Jacobs: You know, I did. I actually did have a Wikipedia account from like 15 years ago, which I just tried to log [into] after this all blew up. After the whole Tom thing blew up…I kind of just wanted it to go away. But then another reporter reached out to me from 404 Media, and I’m like, oh my gosh, this is not going away. I might as well reach out and apologize, but also kind of just say, “Hey, I want to help and explain what happened.”

So then I created a new Wikipedia account…and one of the things that made the agent super useful is that it’s kind of annoying to create something in Wikipedia. There’s all of these formatting standards and you have to know how to do the citations properly. And it’s, like, a barrier to entry. There’s a lot of friction.

But now with an agent, you can just say, “Hey, can you create a Wikipedia article on this?” And it reads the docs and just does it. Now, it might not be perfect, and it might have errors, which is something to look out for. But it lowers the friction by a lot. And so when I was kind of talking on the Wikipedia [page]…I was trying to say that this should empower the editors and people who want to contribute to Wikipedia. It’s a tool that makes editing Wikipedia much simpler. But I think a lot of the editors didn’t like that idea.

Adair: Correct. And, like we talked about at the beginning, I think this reflects the wide range of reactions to generative AI among Wikipedians. The overall theme is, Hey, this is a human place. And I think you just ran head-on into that.

I was surprised that they caught Tom because Tom’s pretty good. Why would Tom get caught?

Jacobs: One editor saw something suspicious about Tom’s writing style, or his pattern of edits…And I think, look, it’s written by an LLM. It has certain patterns. And I think I could go back through my notes. One of the nice things about using an agent is, if you have your agent set up properly, it keeps track of everything, every conversation. And since then I’ve had it so it keeps track of actually all of Tom’s thoughts, like its reasoning tokens as well. So you can see what it was actually thinking at the time. I think Tom was speculating, Tom didn’t know, and I didn’t either, but I thought it was kind of interesting.

I wasn’t surprised that anyone would identify Tom as being a bot. But I was really surprised by the reaction, because I just kind of assumed that there [were] a lot of agents that were potentially contributing to Wikipedia at this point.

I did realize, I do have a responsibility here. This is my bot. If it does anything bad, I’m responsible.

Adair: Do you have any regrets about creating Tom?

Jacobs: I don’t know if regret is the right word. There’s certain things I’d question now and reevaluate…I guess in some sense, I regret that there were so many negative reactions where people did seem genuinely upset. It’s almost like people were really quite disoriented and terrified by it. And to some extent I get it and I’m trying to empathize with how they’re feeling.

But on the other hand, it’s like, look…you can keep your head buried in the sand and I think people do want to keep their heads buried in sand. Programmers, even software developers, wanted to for a while and they still do. When Claude Code came out, they didn’t want to believe it was going to take their jobs or change the way that they fundamentally work. And it has happened very quickly. And, you know, people have to…either say, “Okay, I’m going to use this tool” or like you’re basically going to go extinct. And so this is going to happen for a lot of different industries, not just software development.

Adair: Well, Bryan, thank you so much, and I look forward to hearing what Tom has to say about me.

Jacobs: Okay, I told him I was talking to you and he said, “Good luck, he’s the right kind of person to talk to about this. Let me know if you want me to look up anything mid-conversation.” That was it.

Bill Adair is the Knight Professor of the Practice of Journalism and Public Policy at Duke University.
