
There are two problems coming for Mastodon of which apparently an awful lot of people are unaware. These problems are coming for Mastodon not because of anything specific to Mastodon: they come to all growing social media platforms. But for some reason most people haven't noticed them.

The first problem is that scale has social effects. Most technical people know that scale has technological effects. Same thing's true on the social side, too.

🧵

CC: @Gargron

For instance, consider the questions "How likely, statistically speaking, are you to run into your boss on this social media platform?" and "How likely, statistically speaking, are you to run into your mother on this social media platform?" While obviously there is wide individual variation based on personal circumstances, in general the answer to those questions is going to be a function of how widespread adoption is in one's communities.

Thing is, people behave differently on a social media platform when they think they might run into their boss there. People behave differently when they think they might run into their mother.

And it's not just bosses and mothers, right? I just use those as obvious examples that have a lot of emotional charge. People also behave differently depending on whether or not they think their next-door neighbors will be there (q.v. Nextdoor.com).

🧵

How people behave on a social media platform turns out to be a function of whom they expect to run into – and whom they actually run into! – on that social media platform. And that turns out to be a function of how penetrant adoption is in their communities.
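As a back-of-the-envelope illustration of that dependence (my gloss, not the thread's; the model and numbers are assumptions): if the platform's adoption rate in your community is p, and you have k close contacts, the chance that at least one of them is on the platform is 1 - (1 - p)^k, which rises steeply as adoption spreads.

```python
# Toy model (an assumption of this sketch, not a claim from the thread):
# the chance that at least one of your k close contacts -- boss, mother,
# neighbors -- has an account, if each has one independently with
# probability p (the adoption rate in your community).

def chance_of_running_into_someone(p: float, k: int) -> float:
    """Probability that at least one of k contacts is on the platform."""
    return 1 - (1 - p) ** k

for p in (0.01, 0.05, 0.25, 0.50):
    print(f"adoption {p:>4.0%}: {chance_of_running_into_someone(p, 20):.0%}")
```

Even modest adoption makes an encounter likely: at 5% adoption and 20 close contacts, the chance is already about 64%.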

And a problem here is that so many assume that the behavior of users of a given social media platform is wholly attributable to the features and affordances of that social media platform!

It's very easy to mistake what are effects of being a niche or up-and-coming platform for something the platform is getting right in its design.

The example I gave about people behaving differently depending on what the likelihood is they estimate of running into certain other parties in their lives is not the only example of how scale affects how people engage with a social media platform. There are others that I know about, and probably lots I don't.

🧵

For instance, tech people are probably aware of the phenomenon that virus writers are generally more attracted to writing viruses for platforms that have more users. This is one of the main reasons that there are (and have always been) fewer viruses written against macOS than against Windows.

You've probably never thought of it this way – mad props to the article in Omni I read a long time ago that brought this to my attention – but writing a virus is a kind of *griefing*. Like in a game. It's about fucking up other people's shit for kicks and giggles, if not for profit, and doing so at scale.

Well, griefers – people who are motivated by enjoying griefing as a pastime – are going to be more drawn to bigger platforms with more people to fuck with.

Deliberate malicious obnoxiousness and trolling vary not *linearly* with population size, but *geometrically* or worse.
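One way to make that more-than-linear growth concrete (my framing, not the thread's): the number of distinct user pairs – and hence of potential griefer-to-target encounters – grows as n(n-1)/2, quadratic in population.

```python
# A rough gloss on the "geometrically or worse" claim (my framing, not
# the thread's): potential griefing encounters scale with the number of
# distinct user pairs, n*(n-1)/2, so a 10x larger platform has roughly
# 100x the potential encounters, not 10x.

def potential_pairs(n: int) -> int:
    """Number of distinct user pairs on a platform with n users."""
    return n * (n - 1) // 2

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} users -> {potential_pairs(n):>13,} potential pairs")
```

Ten times the users means roughly a hundred times the potential pairs.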

🧵

Or put another way, a social media platform can avoid a certain amount of social griefing just by being small, and therefore not worth the time of griefers who are looking for bigger fish to fry. As that platform grows, it loses that protection.

So you can't tell, not for sure, how good a platform's systems are for managing that kind of griefing until it gets big enough to really start attracting griefing at scale.

🧵

So that's one problem: there are simply social size effects that shape how people behave on a social media platform, so as the platform grows in adoption, how people behave on it will change. Usually not in ways thought of as changes for the better, because a niche platform can avoid various social problems that can no longer be avoided as it grows.

The other problem I think is even more fascinating.

When a social media platform is founded, there are filter effects on who joins that platform. But as a social media platform grows, those filters – some of them – fall away.

🧵

When I talk about filters, I mean things like the following famous examples:

* When Facebook was founded, it was only for students at universities; one could only sign up for it with a college email address. Consequently, Facebook's early userbase was almost entirely college students – with all that implies for socioeconomic class.

* When G+ was founded, it was initially opened to Google employees, and used an invite code system for rollout, such that overwhelmingly its early users were people in the same social worlds as Googlers.

* In the heyday of USENET, the vast majority of internet users, at all, were college students who were majoring in technical topics.

These social spaces, consequently, inherited (in the object oriented sense) the social norms of the demographics that initially populated them.

🧵

Regardless of the specifics of what different platforms' initial userbases are, one of the fascinating consequences of having such filters is a higher level of social homogeneity.

I know it doesn't seem like a very high level of social homogeneity when you're in it. "What are you talking about, lady?! We have both emacs users AND vi users!"

But in a way that is largely invisible at the time to the people in it, they're in a kind of cultural bubble. They don't realize that a certain amount of social interaction is being lubricated by a common set of assumptions about how people behave and how people *should* behave.

Now they may not like those assumptions very much, they may not be very nice assumptions or ones they find are very agreeable. But they're *known*. Even if unconsciously or inchoately. And that turns out to count for a lot, in terms of reducing conflict or making it manageable.

🧵

But, of course, as a social media platform grows, those filters change or fall away.

Facebook expanded enrollment to high school students, then dropped the requirement of an educational affiliation altogether.

AOL, which at the time was mailing physical install media to every mailing address in the United States, unsolicited, repeatedly, plugged itself into USENET and opened the floodgates in an event that is referred to as the September That Never Ended.

(For those of you who don't know, that term refers to the fact that previously, large numbers of clueless users who didn't know how to operate USENET only showed up at the beginning of the American academic year. AOL, not being tied to the academic calendar and having large numbers of new users every day, effectively swamped the capacity of USENET culture to assimilate new members, by sending a September's worth of cluelessness every month forever thereafter.)

🧵

Additionally, as a social media platform becomes more popular, it becomes more worth the effort to get over the speed bumps that discourage adoption.

We've already seen this with regards to Mastodon. Where previously an awful lot of people couldn't be bothered to figure out this whole federation, picking-a-server thing to set up an account in the first place, of late it has seemed much more worth the effort of sorting that out – not just because Twitter sucks and its users are looking for an alternative, but because Mastodon has become more and more attractive the more and more people use it.

So people who once might have been discouraged from being Mastodon users are no longer discouraged, and that itself is the reduction of a filter. Mastodon is no longer filtering quite so much for people who are unintimidated by new technologies.

Now you might think that's a good thing, you might think that's a bad thing: I'm just pointing out it IS a thing.

🧵

Over time, as a social media platform becomes more and more popular, its membership starts reflecting more and more accurately the full diversity of individuals in a geographic area or linguistic group.

That may be a lovely thing in terms of principles, but it comes with very real challenges – challenges that, frankly, catch most people entirely by surprise, and that most are not really equipped to think about how to deal with.

Most people live in social bubbles to an extent that is hard to overstate. Our societies allow a high degree of autonomy in deciding with whom to affiliate, so we are to various degrees afforded the opportunity to just not deal with people that are too unpleasant for us to deal with. That can include people of cultures we don't particularly like, but it also includes people who are just personally unpleasant.

🧵

Many years ago, at the very beginning of my training to become a therapist, I was having a conversation with a friend (not a therapist) about the challenges of personal security for therapists.

She said, of some example I gave of a threat to therapist safety, "But surely no reasonable person would ever do that!"

"I'm pretty sure," I replied, "the population of people with whom therapists work is not limited only to people who are *reasonable*."

I think of that conversation often when discussing social media. Many of the people who wind up in positions to decide how social media platforms operate and how to try to handle the social problems on them are nice, middle class, college educated, white collar folks whose general attitude to various social challenges is "But surely no reasonable person would ever do that!"

🧵

As a social media platform grows, and its user base becomes more and more reflective of the underlying society it is serving, it will have more and more users on it who behave in ways that the initial culture will not consider "reasonable".

This is the necessary consequence of having less social homogeneity.

Some of that will be because of simple culture clash, where new users come from other cultures with other social expectations and norms. But some of that will be because older users weren't aware they were relying on the niche nature of the platform to just *avoid* antisocial or poorly socialized people, and don't really have a plan for what to do about them when they show up in ever greater numbers, except to leave, only now they *can't* leave, not with impunity, because they're invested in the platform.

So the conflict level goes up dramatically.

🧵

As a side note, one of the additional consequences of this phenomenon – where a growing social media platform starts having a shifting demographic that is more and more culturally and behaviorally diverse, and starts reflecting more and more accurately the underlying diversity of the society it serves, and consequently has more and more expressed conflict – is that a rift opens up between the general mass of users, on the one hand, and the parties that are responsible for the governance of the social media platform, on the other.

This is where things go really sour.

That's because the established users and everyone in a governance position – from a platform's moderators to its software developers to its corporate owners or instance operators – wind up having radically different perspectives, because they are quite literally witnesses to different things.

🧵

The established users, who are still within their own social bubbles, have an experience that feels to them like, "OMG, where did all these jerks come from? The people responsible for running this place should do something to fix it – things were fine here the other day, they need to just make things like they used to be. How hard could it be?" They are only aware of the problems that they encounter personally, or are reported to them socially by other users or through news media coverage of their platform.

But the parties responsible for governance get the fire hose turned on them: they get to hear ALL the complaints. They get an eagle's eye view of the breadth and diversity and extent of problems.

Where individual users see one problem, and don't think it's particularly difficult to solve, the governance parties see a huge number of problems, all at once, such that even if they were easy to solve, it would still be overwhelming just from numbers.

🧵

But of course they're not necessarily as easily solved as the end users think. End users think things like, "Well just do X!" where the governance team is well aware, "But if we did X, that might solve it for you, but it would make it worse for these other people over here having a different problem."

The established users wind up feeling bewildered, hurt, and betrayed by the lack of support around social problems from the governance parties, and, it being a social media platform, they're usually not shy about saying so. Meanwhile, the governance parties start feeling (alas, not incorrectly) their users are not sympathetic to what they're going through, how hard they're working, how hard they're trying, and how incredibly unpleasant what they're dealing with is. They start feeling resentful towards their users, and, in the face of widespread intemperate verbal attacks from their users, sometimes become contemptuous of them.

🧵

The dynamic I just described is, alas, the best case scenario. Add in things like cultural differences between the governance parties and the users, language barriers, good old fashioned racism, sexism, homophobia, transphobia, etc, and any other complexity, and this goes much worse, much faster.

🧵

For anyone out there who is dubious about this difference in perspective between the governance parties and the end users, I want to talk about the most dramatic example of it that I personally encountered.

There used to be on LiveJournal a "community" (group discussion forum) called IIRC "internet_sociology". Pretty much what it sounded like, only it was way more interested in the sociology (and anthropology) of LiveJournal itself, of course, than any of the rest of the internet.

Anyways, one day in, IIRC, the late 00s, somebody posted there a dataviz image of the COMPLETE LiveJournal social graph.

And that was the moment that English-speaking LiveJournal discovered that there was an entirely other HALF of LJ that was Russian-speaking, of which they knew nothing, and to which there was almost no social connection.

🧵

For LJ users who had just discovered the existence of ЖЖ, it was kind of like discovering the lost continent of Atlantis. The dataviz made it very clear. It represented the social graph of the platform they were on as two huge crescents, barely connected, but about the same size. And all along, the governance parties of LJ were also the governance parties of ЖЖ.

And it turns out, absolutely unsurprisingly, LJ and ЖЖ had very different cultures, because they had had different adoption filters to start out with. LJ initially had been overwhelmingly adopted by emo high school students as a *diary* platform (LJ once jokingly announced it was adding an extra server just to index the word "depression".) ЖЖ had initially been adopted by politically active adults – average age, in their 30s – as a *blogging* platform.

🧵

Turns out, also absolutely unsurprisingly, these two populations of users wanted *very* different features, and had quite different problems.

One of the ways LJ/ЖЖ threaded that needle was to make some features literally contingent upon the character set a user picked. LiveJournal literally had "Cyrillic features": features that had nothing to do with the character set itself, but that only turned on for an account if it elected that character set.
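A minimal sketch of what that kind of character-set-gated feature flagging might look like (entirely invented for illustration; the feature names and function here are assumptions, not LiveJournal's actual implementation):

```python
# Hypothetical sketch of character-set-gated feature flags, loosely
# inspired by LJ's "Cyrillic features". Every name here is invented for
# illustration; this is not LiveJournal's actual code or feature list.

CYRILLIC_FEATURES = {"friends_of_friends_feed", "top_posts_rating"}

def enabled_features(account_charset: str, base_features: set[str]) -> set[str]:
    """Return an account's feature set, adding the extra features only
    if the account elected the Cyrillic character set."""
    if account_charset == "cyrillic":
        return base_features | CYRILLIC_FEATURES
    return set(base_features)
```

The design point is that the gate keys off the character set the account elected, not off the feature's actual relationship to that character set – which is exactly the oddity the thread describes.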

🧵

Also unsurprisingly, when a Russian company bought LJ/ЖЖ from an American company, the governance parties started prioritizing the ЖЖ users' issues and feature requests, to the considerable confusion and distress of the LJ users who were unaware of the very existence of ЖЖ. "Why on Earth would we want a feature that does *this*? Why would they think we would want it? Is LJ *insane*? What are they trying to make this place?" No, whatever feature it was, it actually was a pretty attractive one for someone who's a political blogger trying to maximize their reach – i.e., a ЖЖ user.

You can see how a pretty enormous rift can open up between end users, who have literally no clue as to some of the most basic facts of the platform – like, say, entirely 50% of the user base is radically different from them in language and culture and usage patterns and needed affordances – and the governance parties who are trying to juggle all the anvils, some of which are on fire.

🧵

There's a little exercise one can do, if one is an end user of a social media platform (or for that matter a governance party to a niche social media platform that has yet to hit the upslope of the diversity wave) and one wants a better sense of what governance parties have to deal with. If you've ever had a retail job dealing with the general public, just remember what the general public was like to deal with when you waited tables or ran a register or took orders or answered phones.

And if you've never had such a job yourself, or it's been a while, take yourself to a place like Reddit's r/TalesFromRetail or r/TalesFromYourServer and check out the sorts of things people who deal with the general public find themselves having to deal with.

And then reflect on this: all those irrational, entitled, belligerent, obnoxious people are loose in the world, and as your social media platform grows, it will eventually hoover THEM up from the bottom of the pool.

🧵

Because that – and worse (so very much worse) – is what your governance parties have to deal with.

I don't just mean governance parties have to deal with rude people being rude to them. First of all, the problem is so much worse than mere rudeness, and social problems extend far beyond two parties being in some sense in conflict. But secondly, and more importantly, it's *made their problem* when someone is "rude" to someone else. They don't just have to deal with obnoxious people being obnoxious at them; they have to in some sense do something about obnoxiousness in general. They are often put in the position of having to show up and confront the obnoxious person, or otherwise do something to frustrate them – which will probably not make them less obnoxious, and will also bring the governance party to the obnoxious person's attention.

🧵

And if you are yourself a governance party who finds yourself having more and more difficulty empathizing with and respecting your end users, maybe remember what it was like to *be* an end user, and to largely be helpless to handle all sorts of social problems oneself, and to be stuck relying on authorities who may be unsympathetic, actively hostile, and/or just both clueless and clue-resistant.

I mean, just reflect on what it was like to be a Twitter user over the last year. Only don't let yourself use the cop out of "But you don't have to be a Twitter user, you can leave Twitter."

🧵

A lot of people, especially on the Fediverse, wind up being governance parties precisely because they don't want to be disempowered anymore. They want to be the people who make decisions about how to solve social problems on their social media platform of choice, and Mastodon/etc makes that much easier than trying to get a job with Twitter's Trust and Safety team.

So it's worth remembering, if you are a governance party on the Fediverse: that's great for you, you're empowered by that arrangement – but your end users are still end users. They get search if you choose to give them search. They still rely on you, sitting in judgment of the reports they file on other users, to take action against bad actors on their instance. They still experience themselves as largely just as disempowered as they were on Twitter. They have a choice of which lord's fields to till, but they're still peasants.

🧵

Returning to my larger point about the two problems that are coming for Mastodon: I'm seeing a lot of people make a lot of assumptions about how well things are working, in terms of solving social problems, that are basically predicated on not knowing that these two problems are bearing down on us.

This puts me in the weird position of actually arguing against empiricism. I'm usually a big fan of "throw it against the wall and see if it sticks" experimentalism as a remedy for head in the clouds theorizing.

But this is really a situation in which foresight is desperately necessary.

It is simply not accurate to extrapolate the efficacy of various attempts to solve social problems on Mastodon based on how well they've worked so far.

When you're climbing an adoption curve, past performance is not a guarantee of future results.

🧵

Siderea, Sibylla Bostoniensis

A couple decades ago, Clay Shirky gave a talk, which he then published as an essay, "The Group Is Its Own Worst Enemy", about how over and over and over again people who develop online social spaces get surprised by things that happen on their online space – things which had happened previously on OTHER parties' online social spaces, and which those spaces' governance parties had attempted to warn others about.

Now, I have a bunch of reservations about specific details in that essay, but he was sure right about how over and over and over again Bad Things happen to social platforms, and the governance parties who lived through them try to warn others, and they're pretty reliably ignored.

Maybe we could not do that this time?

🧵

[break over, resuming]

Now, I certainly don't have one right answer to propose for what a social media platform should be doing to solve all of these ensuing problems, and I certainly hope nobody thought I did.

But what I do have to propose is a set of attitudes and approaches to building out a social media platform to try to avoid some of the bad outcomes that other platforms have experienced.

My biggest point here is to simply not have a kind of foolish hubris of thinking that because something hasn't been a problem *so far*, that it's been solved.

As with so many things, I think it helps enormously to look into the history of previous attempts to get advanced warning of the circumstances one may find oneself in. And, of course in the case of social media, by "may" I mean "almost certainly will".

There are things that most definitely do not need to be surprises anymore.

🧵

And I want to point out something else that's probably crucial to learning from past mistakes.

When we build a social media platform – when we build anything to allow people to interact in the internet – we are doing something very like building a planned city. We are making decisions about the structures through which people will flow and move and rest and encounter one another and interact with one another.

When architects are designing physical buildings and when urban planners are laying out physical cities, they make decisions about physical structures with the intention of those structures shaping human behavior. People who build amphitheaters are people who want there to be public addresses that many people hear, whether political speech or entertaining theater. People who build temples are people who want there to be collective religious worship. People who build roads want there to be travel.

🧵

Of course architects can choose to build buildings to meet other criteria, besides the effects on the people that interact with them. They can choose to make buildings that support the environment, or save the owners' money, or achieve some political end. They can also build buildings to have social effects not just through their affordances but through aesthetics, such as being beautiful to improve a neighborhood's appearance or to aggrandize an aristocracy.

But primarily buildings are built to be used, and as such they are tools, and we judge them, as we do all tools, by how fit they are for their purpose, whatever that might be.

And the purposes of buildings are to afford various ways of people interacting or avoiding interacting.

So architects think a lot about that. It's a whole thing.

Those who put together social media platforms need to think about the same sort of thing.

🧵

We need to be very conscious that the decisions that are made of how a platform works are decisions that affect how the people who use that platform will interact.

There should be a kind of intentionality – which is something I think Mastodon is doing way better at than a lot of social media projects – around functionality decisions.

But that intentionality has to go beyond merely meaning well. Good intentions poorly informed result in bad outcomes that were never intended but are, nevertheless, still bad.

There is a lot to be said for realizing that decisions for how social media platforms *work* are deliberate attempts to shape – to *engineer* really – human social life on a huge scale. On a scale so huge in fact, that it is not wrong to describe it as trying to *engineer societies*.

🧵

It's unfortunate that the term "social engineering" has a previous meaning as a slang term among computer programmers for a kind of attack on a system that leverages human frailty as opposed to faults in the software, because this – the design of social media platforms – is truly *social engineering*.

From where I sit, with a foot in both the technological and the social sciences, it seems really clear to me that there is no general sense that there is such a field as the engineering of online society. Not the underlying technologies, but the use of technological deployment to instantiate social spaces that bring about certain social realities.

This is not a thing that is taken seriously. To the contrary, it's treated quite lightly.

🧵

The social media world is filled with people just pulling ideas out of their asses and hoping it all works out.

Folks who have been around the block a few times in a governance role have started amassing a body of lore. Case studies, observations they made in the trenches.

At the very least, availing oneself of what they have to share is a good first step.

But if we were to take this seriously as engineering, well, that suggests a few things, doesn't it?

It suggests we get a little bit more sciency about this. It suggests we start imposing a little bit of rigor.

🧵

Engineers tackle well-specified problems, and if the problems they are asked to tackle are not well-specified, they'll either nope out or they'll come up with their own spec.

It would probably do us good to spec out problems we think we're solving more precisely.

I cannot tell you how many conversations I have seen about the topic of "moderation" and how necessary it is in which nobody has ever bothered to set down what exactly it is that they think a moderator is supposed to accomplish.

I mean, it's all of them. I've been on the internet since the 1980s, and I have never seen anyone stop and actually talk about what they thought moderators were trying to do or should try to do.

That makes it a little tricky to evaluate whether or not moderators are given adequate tools to do their jobs. What with not actually having any agreement or understanding or even specification of what those jobs are.

🧵

This specific example is on my mind in part because of reading @kissane's article on Facebook's role in the genocide of the Rohingya in Myanmar. One of the things it mentions is that Facebook's internal apparatus for what we might call moderation was its "bullying-focused 'Compassion Team'". Like many social media platforms constructed by the sorts of people who construct social media platforms, Facebook construed the problem of moderation being one of preventing or discouraging interpersonal conflict on the platform.

But the problem unfolding in the Burmese-language parts of Facebook was not people disagreeing with one another or expressing conflict with one another. It was their *agreeing* with one another.

Agreeing to go kill their neighbors.

This was not something that was even on Facebook's radar, apparently.

🧵

This raises some very fundamental and quite interesting questions about what the role of moderation is on a social media platform. Is it the job of a social media platform to prevent people from using it to collaborate to commit crimes?

Historically, a lot of people who have put together social media platforms have insisted it is absolutely not the job of the platform – or the people who run it – to do that.

But if it's not the job of the platform to do that, whose job is it, when a platform, by its affordances, makes real world crimes – horrendous, very serious "real-world" crimes like actual genocide – not just more likely, but so much more likely they are effectively enabling a crime that wouldn't otherwise happen?

Why should our societies – our larger, meat-world societies – tolerate the building and operating of social media platforms that destabilize them and are detrimental to them?

🧵

Or put another way, why should our societies tolerate the existence of *irresponsibly* designed and operated social media platforms, that increase violence and other antisocial behavior?

So it turns out the failure of internet culture to actually have a discourse around what even moderators are supposed to be doing is a literally lethal mistake.

And this example is merely one wrinkle in the much, much larger conversation about what moderation is, and the diversity of things that it can be, and maybe should be.

A conversation that has to happen before you can have the conversation that goes, "Okay, of the things that moderation can be, which things do we think it needs to be on our platform, and what do we need to do, in the design of our platform, to bring it into existence and make it work the way we think it should?"

🧵

Consider what has unfolded recently with Reddit turning off its API, such that tools its moderators relied on are no longer available to them. Reddit's structure is that it allows anyone to start their own forum and gives them authority to moderate it however – to a first approximation – they see fit. But it doesn't provide the tools necessary – nor, any longer, allow third parties to provide those tools – such that many moderator functions can be performed, so there's a limit to what kinds of moderation can happen there, and how well it can be carried out. This has literally changed what kinds of conversations and what kinds of forums can happen on Reddit.

🧵

Now, I'm not party to what's happening inside Reddit. I don't know the logic of their decisions. But I do know a whole lot of very thoughtful Reddit users who have spaces they moderate on Reddit have explained in great detail and length ("Concision is not our brand." - a mod from r/AskHistorians explaining on Twitter about this very thing) what their needs are and why they were objecting to Reddit turning off the API.

Reddit corporately decided that supporting those affordances was unimportant, or at least less important than something else that conflicted with them.

Reddit made a design decision that changed the nature of what moderation *could* mean on Reddit. They reduced its scope. That, in turn, changed how moderators could interact with the users they moderated, and that in turn changed how users interacted with one another.

🧵

Like I said, I'm not party to Reddit's internal corporate thinking. But I think it's a pretty good educated guess to say: Reddit's decision was not based upon what would optimize Reddit's social functions. When Reddit made this decision, I'm feeling pretty confident it was not a *social engineering* decision. It was not made to make Reddit function better in some social sense. Nobody made this decision thinking, "Actually, reducing the capacity of moderators to do tasks that are part of moderation will actually improve the social reality of Reddit in this particular way."

At very best, this decision was made to optimize something else in full awareness, "Yes, this will be detrimental to Reddit's social world, but it can't be helped, because of other considerations that outrank quality of social engineering right now."

But of course, the social effect on Reddit might have been simply dismissed, or discounted.

🧵

It's a pretty common thing for people to scoff at the idea that the affordances of a platform have something to do with how people behave on it, and that if you make the wrong decisions about how your platform operates you'll get outcomes that you won't like.

It's a pretty common thing for people to take the attitude, "Oh, geeze, what difference does it make whether this feature exists? People will do whatever they want to do anyway."

One of the things Mastodon has going for it is a userbase that mostly doesn't cop out like that. Mastodon is full of people who believe deeply that how the software works and what its affordances are matter immensely to how social life on Mastodon unfolds.

The problem isn't convincing Mastodonians that these things matter – it's convincing them not to take their first impressions from traumatic experiences on Twitter as gospel truths.

🧵

It's probably pretty easy for Reddit executives to sniff and say, "Well you know the user base, they're making a big to-do about nothing; they'll figure out how to moderate without those tools, they're hardly critical."

Mastodonians are smarter than to do that, but we have a bit of a problem of falling down at the next step. It's great that people here don't scorn the idea that affordances matter to user behavior, but the next step is to actually find out *how* affordances affect user behavior.

Like, we could imagine a more enlightened Reddit not cutting off its moderators at the knees by shutting down the API access they needed. But we could also imagine an even more enlightened Reddit than that, one that built its own versions of those moderator tools right into its own platform, so that moderators didn't have to use third-party tools via the API.

But we could also imagine an even more enlightened Reddit than *that*.

🧵

We could imagine an alternate-reality, even more enlightened Reddit that not only had its own built-in moderator tools, but was actually concerned with the question of whether or not it had the *right* moderator tools, and how those tools affected how Reddit functioned, socially.

It might do things like hold focus group discussions with moderators, send anthropologists into various subs to observe the behavior of moderators and to virtually shadow moderators going about their moderating tasks, A/B test moderator tools, and run opt-in betas of new moderator tools.
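A/B testing a moderator tool here just means comparing some agreed-upon metric between moderators given the old tool and moderators given the new one. A minimal sketch of the statistics involved – the numbers and the "resolved within 24 hours" metric are entirely invented for illustration:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: does variant B's rate differ from variant A's?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical data: with the old tool, 1400 of 2000 reports were
# resolved within 24 hours; with the new tool, 1520 of 2000.
z = two_proportion_z(1400, 2000, 1520, 2000)
print(round(z, 2))  # → 4.27; |z| > 1.96 is significant at the 5% level
```

The point isn't the particular test – it's that you can't run it at all until someone has decided what counts as the tool "working."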

You know, basic grown-up company stuff, when a company actually cares about how its software functions.

But not just that.

🧵

We could imagine an ultra enlightened Reddit that actually has opinions about how it wants its social world to function, and actually makes decisions like, "We don't like some things that we think are maybe a product of moderators not providing 24/7 coverage of high volume groups, so we're going to find out if there's some way, or ways, plural, of solving that problem. We'll investigate whether there are things we can do either to facilitate moderators providing 24/7 coverage, or to obviate the need for moderators to provide 24/7 coverage, and then we'll evaluate whether or not it worked to remedy the things we saw as problems being caused by that."

In the social services field, this crucial last bit is called "program evaluation".

🧵

I think of it as sort of like calling one's shots in billiards. You decide what change you would like to see in the system you are designing for; you come up with an "intervention" (based on studying the problem, reading up on other people's approaches to solving it, and maybe doing a bunch of rounds of iterative experimentation); you decide, up front, how you will determine whether or not the problem has been solved; then you implement the intervention; and then you check the criteria you previously identified against what actually happened.

This is what I mean by rigor. This is pretty sciencey, no? It's not necessarily a controlled experiment, but it does have the form of an experiment. But it's not a *mere* experiment, either. It's not just a trial to see whether or not something will work. It's an attempt to actually do something that will work. With some slightly more rigorous testing as to whether it did.
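The "calling your shots" discipline can be made concrete in a few lines. This is a toy sketch, with every metric name and threshold invented for illustration: the essential move is that the success criteria are written down *before* the intervention, then mechanically checked against post-intervention data.

```python
# Success criteria fixed up front, before the intervention ships.
# All names and thresholds here are hypothetical.
criteria = {
    # metric name -> (comparison, threshold) decided in advance
    "median_hours_to_resolve_report": ("<=", 12.0),
    "pct_threads_locked_preemptively": ("<=", 5.0),
}

def evaluate(criteria, measured):
    """Return {metric: passed?} for post-intervention measurements."""
    ops = {"<=": lambda a, b: a <= b, ">=": lambda a, b: a >= b}
    return {name: ops[op](measured[name], threshold)
            for name, (op, threshold) in criteria.items()}

after = {"median_hours_to_resolve_report": 9.5,
         "pct_threads_locked_preemptively": 7.2}
print(evaluate(criteria, after))
# → {'median_hours_to_resolve_report': True,
#    'pct_threads_locked_preemptively': False}
```

One criterion met, one missed: the intervention partly worked – and we know that only because "worked" was defined in advance, not rationalized after the fact.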

🧵

Part of what makes it so rigorous is how formal it is, with that business of deciding in advance what the criteria of success will be. That in turn requires a certain amount of serious thinking about social phenomena, and actually getting explicit about things that, frankly, usually just get hand waved through when we're talking about social media platforms. Questions like "what even do we expect our moderators to be achieving?"

It means doing hard things like asking, "Okay, we want people to 'feel more safe': how will we be able to tell that people are feeling more (or less) 'safe'? If people were feeling more or less safe, how would we know? How would we be able to measure it? To observe it in the data? What is it that we are assuming will change in people's behavior based on how safe they feel?"

In the social sciences, this is called "operationalizing" an abstraction or concept or feeling.
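What operationalizing looks like in practice is choosing observable proxies you *assume* track the feeling, and computing them from data. A sketch, where every proxy, assumption, and field name is invented for illustration:

```python
# Hypothetical operationalization of "users feel safe": pick behaviors
# we assume track the feeling, and measure them from an event log.

def safety_proxies(events):
    """events: list of dicts with 'type' and 'user' keys."""
    posters = {e["user"] for e in events if e["type"] == "post"}
    reporters = {e["user"] for e in events if e["type"] == "report"}
    leavers = {e["user"] for e in events if e["type"] == "deactivate"}
    return {
        # Assumption: people who feel unsafe stop posting or leave...
        "posting_retention": len(posters - leavers) / max(len(posters), 1),
        # ...or file harassment reports more often.
        "reports_per_poster": len(reporters) / max(len(posters), 1),
    }

events = [
    {"type": "post", "user": "a"}, {"type": "post", "user": "b"},
    {"type": "report", "user": "b"}, {"type": "deactivate", "user": "b"},
]
print(safety_proxies(events))
# → {'posting_retention': 0.5, 'reports_per_poster': 0.5}
```

The hard, honest work is in the comments, not the code: defending the claim that these particular behaviors actually track the feeling you care about.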

🧵

One of the things you might notice, from the fact that I've now twice mentioned other fields that have technical terminology that applies here: oh hey, there are other fields that have clues that pertain. They have methodologies and other cool toys you might want to play with.

Engineering is, to a first approximation, applied science. So if you want to engineer socials, you might want to start hitting up the social scientists and people in other fields that apply social science.

This is something else Mastodon has going for it: it's got social scientists around here somewhere.

Now, I appreciate what I propose here has a bootstrapping problem. I don't know whether any of the decision makers about code, protocols, or individual instances have the capacity to enlist the help of the people on Mastodon with those professional clues, and I'm not sure Mastodon has the affordances to bring them together.

🧵

@siderea I read this entire thing and I just want you to know that never before have I been so offended by something I completely agree with!

"When you're climbing an adoption curve, past performance is not a guarantee of future results."

Great point

@siderea have you heard of User Experience Design or HCI? They don’t get respected as much as they ought to, but they do at least exist.

@irenes Oooh, thanks! I will check that out!

@siderea

On the subject of content moderation and the responsibility of a service provider, such as CloudFlare, or a social media service such as Facebook to determine which modes of moderation to enact, I feel that a great deal of the lack of such things is a holdover from older days. The early internet was incredibly Libertarian in population, and it's not hard to see why: the hackers had the run of the sandbox, and as a whole didn't want there to be any intervention which would keep them from building castles. That's also a mindset which trends towards the notion that there's no such thing as 'bad speech', which argument has been studied and disproven extensively over the last few decades.

I do feel like the current architects of the systems we're seeing carry at least a reflection of that stance, and we can see it in their paeans even now. (Looking at you, EFF.) So it's easier for them to be willfully blind to the harms caused by bad actors with digital megaphones.

@siderea really enjoying this thread! Thanks! Not to the end yet but my immediate thought about this paragraph was: except, maybe, in adtech.. I think they've got the large scale social engineering down, with decades of experience pre-dating the web, take it very seriously indeed and are using their powers for evil :/

@siderea and that despite the fact that there are good books detailing proven patterns for building online communities...

@weekend_editor I might – I'll certainly check it out – I'm just a little dubious that I'm going to find value in anything that can be described with the framing of "those who are the community versus those who use the community".

I was going to explain why, but then an in vivo example showed up in the other reply you got. There is always somebody who will be along shortly to explain why some other perfectly prosocial and usefully contributing demographic aren't *really* members of a community because *they* somehow benefit from being members of that community, so they're just *using* it.

@siderea

> I'm just a little dubious that I'm going to find value in anything that can be described with the framing of "those who are the community versus those who use the community".

Fair enough.

In case it makes things easier, Chapman's thesis is rather close to what you've been saying.

(1) A community is often founded by creators and enthusiasts for their creations. Basically a bunch of people who make a particular thing and those REALLY into it.

(2) As a community scales, it attracts people who want to use membership to leverage their own social capital. These are "influencer" wannabes and the like. This is still pretty ok with everyone.

(3) Eventually, if it gets big enough, somebody figures out how to monetize it. Invariably the business people take over. This CAN be ok for a good long while, but the business pressures toward the dark triad are significant.

(4) Then Cory Doctorow's famous en-<mumble>-ification process takes hold. (Chapman wrote back in the halcyon days of 2010, so he would not have used this term.)

(5) The founders wonder how that happened AGAIN, and begin an exodus to a new community.

You spoke of griefers, which is certainly one way this manifests.

Chapman & Doctorow speak of the more or less inevitable economic pressures, whether social capital or monetary capital. The naïveté of many of us in our geekier mode makes us easier exploitation targets.

I've personally seen that cycle 3 or 4 times. It starts to look pretty familiar; the question is whether it's inevitable.

@siderea

> There is always somebody who will be along shortly to explain why some other perfectly prosocial and usefully contributing demographic aren't *really* members of a community because *they* somehow benefit from being members of that community, so they're just *using* it.

There are, alas, always those who wish to police boundaries. Even when that's a bad idea.

A thing in which I take some pride about my career was mentoring junior colleagues. I tried VERY hard not to say "no" when they wanted to cross a boundary. Instead, I explained (1) what the boundary was, (2) why it was a good thing it was there, and (3) what they had to do to cross it while retaining credibility. (Or that it was a bad boundary, and I'd be happy to smash it together with them.)

Now, the "in vivo example" wanted to point out commercially published authors as an example of using a social media for promotion.

Fine.

One example is @scalzi (or Twitter, or ...). He's an example of GOOD use of communities: he's polite, funny, and generally erudite. (Yes, he does promote his books; why would anyone expect otherwise?)

The 50% of the time he doesn't talk about his books (family, friends, politics, food, "whatever") is interesting. The rest is STILL interesting and informative. He's only gotten me to buy a book once that way. Still, the chatter attracts me.

(Another example is me. I only started using social media to promote my blog:

someweekendreading.blog/

See what I did there, using a post about promotion to promote? Not especially clever. But I hope not especially annoying.)

So our interlocutor's example of writers using social media CAN be a fine thing.


@siderea

BTW, it just came to my attention that Chapman is on Mastodon:

@Meaningness

So I should have at least cited him, and given him the chance to say something or other if he wants.