There are two problems coming for Mastodon of which an awful lot of people are apparently unaware. These problems are coming for Mastodon not because of anything specific to Mastodon: they come for all growing social media platforms. But for some reason most people haven't noticed them.
The first problem is that scale has social effects. Most technical people know that scale has technological effects. Same thing's true on the social side, too.
For instance, consider the questions "How likely, statistically speaking, are you to run into your boss on this social media platform?" and "How likely, statistically speaking, are you to run into your mother on the social media platform?" While obviously there is wide individual variation based on personal circumstances, in general the answer to those questions is going to be a function of how widespread adoption is in one's communities.
Thing is, people behave differently on a social media platform when they think they might run into their boss there. People behave differently when they think they might run into their mother.
And it's not just bosses and mothers, right? I just use those as obvious examples that have a lot of emotional charge. People also behave differently depending on whether or not they think their next-door neighbors will be there (q.v. Nextdoor.com).
How people behave on a social media platform turns out to be a function of whom they expect to run into – and whom they actually run into! – on that social media platform. And that turns out to be a function of how penetrant adoption is in their communities.
And a problem here is that so many assume that the behavior of users of a given social media platform is wholly attributable to the features and affordances of that social media platform!
It's very easy to mistake what are effects of being a niche or up-and-coming platform for something the platform is getting right in its design.
The example I gave about people behaving differently depending on what the likelihood is they estimate of running into certain other parties in their lives is not the only example of how scale affects how people engage with a social media platform. There are others that I know about, and probably lots I don't.
For instance, tech people are probably aware of the phenomenon that virus writers are generally more attracted to writing viruses for platforms that have more users. This is one of the main reasons there are (and have always been) fewer viruses written against macOS than against Windows.
You've probably never thought of it this way – mad props to the article in Omni I read a long time ago that brought this to my attention – but writing a virus is a kind of *griefing*. Like in a game. It's about fucking up other people's shit for kicks and giggles, if not for profit, and doing so at scale.
Well, griefers – people who are motivated by enjoying griefing as a pastime – are going to be more drawn to bigger platforms with more people to fuck with.
Deliberate malicious obnoxiousness and trolling varies not *linearly* with population size, but *geometrically* or worse.
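The "geometrically or worse" claim can be made concrete with a toy model. (This is my own back-of-the-envelope illustration, not something from the original argument.) Even if the *fraction* of griefers stays constant as a platform grows, the number of possible pairings between any two users – and therefore the number of opportunities for one user to grief another – grows roughly with the square of the user count:

```python
# Toy model: potential user-to-user pairings grow quadratically,
# so a 10x increase in users yields roughly a 100x increase in
# possible griefer/victim encounters.

def possible_pairs(n_users: int) -> int:
    """Number of distinct user pairs: n choose 2."""
    return n_users * (n_users - 1) // 2

for n in (100, 1_000, 10_000):
    print(f"{n:>6} users -> {possible_pairs(n):>12,} possible pairings")
```

This is only a sketch, of course – real interaction graphs are sparse, and griefers seek audiences rather than random pairs – but it shows why a platform ten times bigger faces far more than ten times the trouble.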
Or put another way, a social media platform can avoid a certain amount of social griefing just by being small, and therefore not worth the time of griefers who are looking for bigger fish to fry. As that platform grows, it loses that protection.
So you can't tell, not for sure, how good a platform's systems are for managing that kind of griefing until it gets big enough to really start attracting griefing at scale.
So that's one problem: there are simply social size effects that shape how people behave on a social media platform, so as the platform grows in adoption, how people behave on it will change. Usually not in ways thought of as changes for the better, because a niche platform can avoid various social problems that can no longer be avoided as it grows.
The other problem I think is even more fascinating.
When a social media platform is founded, there are filter effects on who joins that platform. But as a social media platform grows, those filters – some of them – fall away.
When I talk about filters, I mean things like the following famous examples:
* When Facebook was founded, it was only for students at universities; one could only sign up for it with a college email address. Consequently, Facebook's early userbase was almost entirely college students – with all that implies for socioeconomic class.
* When G+ was founded, it was initially opened to Google employees, and used an invite code system for rollout, such that overwhelmingly its early users were people in the same social worlds as Googlers.
* In the heyday of USENET, the vast majority of internet users, period, were college students majoring in technical subjects.
These social spaces, consequently, inherited (in the object oriented sense) the social norms of the demographics that initially populated them.
Regardless of the specifics of a given platform's initial userbase, one of the fascinating consequences of having such filters is a higher level of social homogeneity.
I know it doesn't seem like a very high level of social homogeneity when you're in it. "What are you talking about, lady?! We have both emacs users AND vi users!"
But in a way that is largely invisible at the time to the people in it, they're in a kind of cultural bubble. They don't realize that a certain amount of social interaction is being lubricated by a common set of assumptions about how people behave and how people *should* behave.
Now, they may not like those assumptions very much; they may not be very nice assumptions, or ones they find agreeable. But they're *known*, even if unconsciously or inchoately. And that turns out to count for a lot in terms of reducing conflict or making it manageable.
But, of course, as a social media platform grows, those filters change or fall away.
Facebook expanded enrollment to high school students, then dropped the requirement of an educational affiliation altogether.
AOL, which at the time was mailing physical install media to every mailing address in the United States, unsolicited, repeatedly, plugged itself into USENET and opened the floodgates in an event that is referred to as the September That Never Ended.
(For those of you who don't know, that term refers to the fact that previously, large numbers of clueless users who didn't know how to operate USENET only showed up at the beginning of the American academic year. AOL, not being tied to the academic calendar and having large numbers of new users every day, effectively swamped the capacity of USENET culture to assimilate new members by sending a September's worth of cluelessness every month forever thereafter.)
Additionally, as a social media platform becomes more popular, it becomes more worth the effort to get over the speed bumps that discourage adoption.
We've already seen this with regard to Mastodon. Where previously an awful lot of people couldn't be bothered to figure out this whole federation, picking-a-server thing to set up an account in the first place, of late it has seemed much more worth the effort of sorting that out, not just because Twitter sucks and its users are looking for an alternative, but because Mastodon has become more and more attractive the more and more people use it.
So people who once might have been discouraged from being Mastodon users are no longer discouraged, and that itself is the reduction of a filter. Mastodon is no longer filtering quite so much for people who are unintimidated by new technologies.
Now you might think that's a good thing, you might think that's a bad thing: I'm just pointing out it IS a thing.
Over time, as a social media platform becomes more and more popular, its membership starts reflecting more and more accurately the full diversity of individuals in a geographic area or linguistic group.
That may be a lovely thing in terms of principles, but it comes with very real challenges – challenges that, frankly, catch most people entirely by surprise, and that they are not really equipped to deal with.
Most people live in social bubbles to an extent that is hard to overstate. Our societies allow a high degree of autonomy in deciding with whom to affiliate, so we are to various degrees afforded the opportunity to just not deal with people that are too unpleasant for us to deal with. That can include people of cultures we don't particularly like, but it also includes people who are just personally unpleasant.
Many years ago, at the very beginning of my training to become a therapist, I was having a conversation with a friend (not a therapist) about the challenges of personal security for therapists.
She said, of some example I gave of a threat to therapist safety, "But surely no reasonable person would ever do that!"
"I'm pretty sure," I replied, "the population of people with whom therapists work is not limited only to people who are *reasonable*."
I think of that conversation often when discussing social media. Many of the people who wind up in positions to decide how social media platforms operate and how to try to handle the social problems on them are nice, middle class, college educated, white collar folks whose general attitude to various social challenges is "But surely no reasonable person would ever do that!"
As a social media platform grows, and its user base becomes more and more reflective of the underlying society it is serving, it will have more and more users on it who behave in ways that the initial culture will not consider "reasonable".
This is the necessary consequence of having less social homogeneity.
Some of that will be because of simple culture clash, where new users come from other cultures with other social expectations and norms. But some of that will be because older users weren't aware they were relying on the niche nature of the platform to just *avoid* antisocial or poorly socialized people, and don't really have a plan for what to do about them when they show up in ever greater numbers, except to leave, only now they *can't* leave, not with impunity, because they're invested in the platform.
So the conflict level goes up dramatically.
As a side note, one of the additional consequences of this phenomenon – where a growing social media platform starts having a shifting demographic that is more and more culturally and behaviorally diverse, and starts reflecting more and more accurately the underlying diversity of the society it serves, and consequently has more and more expressed conflict – is that a rift opens up between the general mass of users, on the one hand, and the parties that are responsible for the governance of the social media platform, on the other.
This is where things go really sour.
That's because the established users and everyone in a governance position – from a platform's moderators to its software developers to its corporate owners or instance operators – wind up having radically different perspectives, because they are quite literally witnesses to different things.
The established users, who are still within their own social bubbles, have an experience that feels to them like, "OMG, where did all these jerks come from? The people responsible for running this place should do something to fix it – things were fine here the other day, they need to just make things like they used to be. How hard could it be?" They are only aware of the problems that they encounter personally, or are reported to them socially by other users or through news media coverage of their platform.
But the parties responsible for governance get the fire hose turned on them: they get to hear ALL the complaints. They get an eagle's eye view of the breadth and diversity and extent of problems.
Where individual users see one problem, and don't think it's particularly difficult to solve, the governance parties see a huge number of problems, all at once, such that even if they were easy to solve, it would still be overwhelming just from numbers.
But of course they're not necessarily as easily solved as the end users think. End users think things like, "Well just do X!" where the governance team is well aware, "But if we did X, that might solve it for you, but it would make it worse for these other people over here having a different problem."
The established users wind up feeling bewildered, hurt, and betrayed by the lack of support around social problems from the governance parties, and, it being a social media platform, they're usually not shy about saying so. Meanwhile, the governance parties start feeling (alas, not incorrectly) that their users are not sympathetic to what they're going through, how hard they're working, how hard they're trying, and how incredibly unpleasant what they're dealing with is. They start feeling resentful towards their users, and, in the face of widespread intemperate verbal attacks from their users, sometimes become contemptuous of them.
The dynamic I just described is, alas, the best-case scenario. Add in things like cultural differences between the governance parties and the users, language barriers, good old-fashioned racism, sexism, homophobia, transphobia, etc., and any other complexity, and this goes much worse, much faster.
For anyone out there who is dubious about this difference in perspective between the governance parties and the end users, I want to talk about the most dramatic example of it that I personally encountered.
There used to be on LiveJournal a "community" (group discussion forum) called IIRC "internet_sociology". Pretty much what it sounded like, only it was way more interested in the sociology (and anthropology) of LiveJournal itself, of course, than any of the rest of the internet.
Anyways, one day in (IIRC) the late '00s, somebody posted there a dataviz image of the COMPLETE LiveJournal social graph.
And that was the moment that English-speaking LiveJournal discovered that there was an entirely other HALF of LJ that was Russian-speaking, of which they knew nothing, and to which there was almost no social connection.
For LJ users who had just discovered the existence of ЖЖ, it was kind of like discovering the lost continent of Atlantis. The dataviz made it very clear: it represented the social graph of the platform they were on as two huge crescents, barely connected, but about the same size. And all along, the governance parties of LJ were also the governance parties of ЖЖ.
And it turns out, absolutely unsurprisingly, LJ and ЖЖ had very different cultures, because they had had different adoption filters to start out with. LJ initially had been overwhelmingly adopted by emo high school students as a *diary* platform. (LJ once jokingly announced it was adding an extra server just to index the word "depression.") ЖЖ had initially been adopted by politically active adults – average age in their 30s – as a *blogging* platform.
Turns out, also absolutely unsurprisingly, these two populations of users wanted *very* different features, and had quite different problems.
One of the ways LJ/ЖЖ threaded that needle was to make some features literally contingent upon the character set a user picked. LiveJournal literally had "Cyrillic features": features that had nothing to do with the character set itself, but that only turned on for an account if it elected that character set.
Also unsurprisingly, when a Russian company bought LJ/ЖЖ from an American company, the governance parties started prioritizing the ЖЖ users' issues and feature requests, to the considerable confusion and distress of the LJ users, who were unaware of the very existence of ЖЖ. "Why on Earth would we want a feature that does *this*? Why would they think we would want it? Is LJ *insane*? What are they trying to make this place?" In fact, whatever the feature in question was, it was usually quite attractive to someone who's a political blogger trying to maximize their reach – i.e., ЖЖ users.
You can see how a pretty enormous rift can open up between end users, who have literally no clue as to some of the most basic facts of the platform – like, say, that an entire 50% of the user base is radically different from them in language and culture and usage patterns and needed affordances – and the governance parties who are trying to juggle all the anvils, some of which are on fire.
There's a little exercise one can do, if one is an end user of a social media platform (or, for that matter, a governance party to a niche social media platform that has yet to hit the upslope of the diversity wave) and one wants a better sense of what governance parties have to deal with. If you've ever had a retail job dealing with the general public, just remember what the general public was like to deal with when you waited tables or ran a register or took orders or answered phones.
And if you've never had such a job yourself, or it's been a while, take yourself to a place like Reddit's r/TalesFromRetail or r/TalesFromYourServer and check out the sorts of things people who deal with the general public find themselves having to deal with.
And then reflect on this: all those irrational, entitled, belligerent, obnoxious people are loose in the world, and as your social media platform grows, it will eventually hoover THEM up from the bottom of the pool.
Because that – and worse (so very much worse) – is what your governance parties have to deal with.
I don't just mean that governance parties have to deal with rude people being rude to them. First of all, the problem is so much worse than mere rudeness, and social problems extend far beyond conflicts between two parties. But secondly, and more importantly, it's *made their problem* when someone is "rude" to someone else. They don't just have to deal with obnoxious people being obnoxious at them; they have to, in some sense, do something about obnoxiousness in general. They are often put in the position of having to show up and confront an obnoxious person, or otherwise do something to frustrate them – which will probably not make that person less obnoxious, and which will bring the governance party to the obnoxious person's attention.
And if you are yourself a governance party who finds yourself having more and more difficulty empathizing with and respecting your end users, maybe remember what it was like to *be* an end user, and to largely be helpless to handle all sorts of social problems oneself, and to be stuck relying on authorities who may be unsympathetic, actively hostile, and/or just both clueless and clue-resistant.
I mean, just reflect on what it was like to be a Twitter user over the last year. Only don't let yourself use the cop-out of "But you don't have to be a Twitter user, you can leave Twitter."
A lot of people, especially on the Fediverse, wind up being governance parties precisely because they don't want to be disempowered anymore. They want to be the people who make decisions about how to solve social problems on their social media platform of choice, and Mastodon/etc makes that much easier than trying to get a job with Twitter's Trust and Safety team.
So it's worth remembering, if you are a governance party on the Fediverse: that's great for you, you're empowered by that arrangement – but your end users are still end users. They get search only if you choose to give them search. They still rely on your sitting in judgment of the reports they file on other users to take action against bad actors on their instance. They still experience themselves as largely just as disempowered as they were on Twitter. They have a choice of which lord's fields to till, but they're still peasants.
But I digress.
Returning to my larger point about the two problems that are coming for Mastodon: I'm seeing a lot of people make a lot of assumptions about how well things are working, in terms of solving social problems, that are basically predicated on not knowing that these two problems are bearing down on us.
This puts me in the weird position of actually arguing against empiricism. I'm usually a big fan of "throw it against the wall and see if it sticks" experimentalism as a remedy for head-in-the-clouds theorizing.
But this is really a situation in which foresight is desperately necessary.
It is simply not accurate to extrapolate the efficacy of various attempts to solve social problems on Mastodon based on how well they've worked so far.
When you're climbing an adoption curve, past performance is not a guarantee of future results.
A couple of decades ago, Clay Shirky gave a talk, which he then published as an essay, "A Group Is Its Own Worst Enemy", about how over and over and over again people who develop online social spaces get surprised by things that happen on their online space – things which had happened previously on OTHER parties' online social spaces, and which those spaces' governance parties had attempted to warn others about.
Now, I have a bunch of reservations about specific details in that essay, but he was sure right about how over and over and over again Bad Things happen to social platforms, and the governance parties who lived through them try to warn others, and they're pretty reliably ignored.
Maybe we could not do that this time?
Now, I certainly don't have one right answer to propose for what a social media platform should be doing to solve all of these ensuing problems, and I certainly hope nobody thought I did.
But what I do have to propose is a set of attitudes and approaches to building out a social media platform to try to avoid some of the bad outcomes that other platforms have experienced.
My biggest point here is simply not to have the kind of foolish hubris of thinking that because something hasn't been a problem *so far*, it's been solved.
As with so many things, I think it helps enormously to look into the history of previous attempts, to get advance warning of the circumstances one may find oneself in. And, of course, in the case of social media, by "may" I mean "almost certainly will".
There are things that most definitely do not need to be surprises anymore.
And I want to point out something else that's probably crucial to learning from past mistakes.
When we build a social media platform – when we build anything to allow people to interact on the internet – we are doing something very like building a planned city. We are making decisions about the structures through which people will flow and move and rest and encounter one another and interact with one another.
When architects design physical buildings and urban planners lay out physical cities, they make decisions about physical structures with the intention of those structures shaping human behavior. People who build amphitheaters are people who want there to be public addresses that many people hear, whether political speech or entertaining theater. People who build temples are people who want there to be collective religious worship. People who build roads want there to be travel.
Of course architects can choose to build buildings to meet other criteria, besides the effects on the people that interact with them. They can choose to make buildings that support the environment, or save the owners' money, or achieve some political end. They can also build buildings to have social effects not just through their affordances but through aesthetics, such as being beautiful to improve a neighborhood's appearance or to aggrandize an aristocracy.
But primarily buildings are built to be used, and as such they are tools, and we judge them, as we do all tools, by how fit they are for their purpose, whatever that might be.
And the purposes of buildings are to afford various ways of people interacting or avoiding interacting.
So architects think a lot about that. It's a whole thing.
Those who put together social media platforms need to think about the same sort of thing.
We need to be very conscious that decisions about how a platform works are decisions that affect how the people who use that platform will interact.
There should be a kind of intentionality – which is something I think Mastodon is doing way better at than a lot of social media projects – around functionality decisions.
But that intentionality has to go beyond merely meaning well. Good intentions poorly informed result in bad outcomes that were never intended but are, nevertheless, still bad.
There is a lot to be said for realizing that decisions for how social media platforms *work* are deliberate attempts to shape – to *engineer* really – human social life on a huge scale. On a scale so huge in fact, that it is not wrong to describe it as trying to *engineer societies*.
It's unfortunate that the term "social engineering" has a previous meaning as a slang term among computer programmers for a kind of attack on a system that leverages human frailty as opposed to faults in the software, because this – the design of social media platforms – is truly *social engineering*.
From where I sit, with a foot in both the technological and the social sciences, it seems really clear to me that there is no general sense that there is such a field as the engineering of online society. Not the engineering of the underlying technologies, but the use of technological deployment to instantiate social spaces – to bring about certain social realities.
This is not a thing that is taken seriously. To the contrary, it's treated quite lightly.
The social media world is filled with people just pulling ideas out of their asses and hoping it all works out.
Folks who have been around the block a few times in a governance role have started amassing a body of lore. Case studies, observations they made in the trenches.
At the very least, availing oneself of what they have to share is a good first step.
But if we were to take this seriously as engineering, well, that suggests a few things, doesn't it?
It suggests we get a little bit more sciency about this. It suggests we start imposing a little bit of rigor.
Engineers tackle well-specified problems, and if the problems they are asked to tackle are not well-specified, they'll either nope out or they'll come up with their own spec.
It would probably do us good to spec out problems we think we're solving more precisely.
I cannot tell you how many conversations I have seen about the topic of "moderation" and how necessary it is in which nobody has ever bothered to set down what exactly it is that they think a moderator is supposed to accomplish.
I mean, it's all of them. I've been on the internet since the 1980s, and I have never seen anyone stop and actually talk about what they thought moderators were trying to do or should try to do.
That makes it a little tricky to evaluate whether or not moderators are given adequate tools to do their jobs. What with not actually having any agreement or understanding or even specification of what those jobs are.
This specific example is on my mind in part because of reading @kissane's article on Facebook's role in the genocide of the Rohingya in Myanmar. One of the things it mentions is that Facebook's internal apparatus for what we might call moderation was its "bullying-focused 'Compassion Team'". Like many social media platforms constructed by the sorts of people who construct social media platforms, Facebook construed the problem of moderation as one of preventing or discouraging interpersonal conflict on the platform.
But the problem unfolding in the Burmese-language parts of Facebook was not people disagreeing with one another or expressing conflict with one another. It was their *agreeing* with one another.
Agreeing to go kill their neighbors.
This was not something that was even on Facebook's radar, apparently.
This raises some very fundamental and quite interesting questions about what the role of moderation is on a social media platform. Is it the job of a social media platform to prevent people from using it to collaborate to commit crimes?
Historically, a lot of people who have put together social media platforms have insisted it is absolutely not the job of the platform – or the people who run it – to do that.
But if it's not the job of the platform to do that, whose job is it, when a platform, by its affordances, makes real world crimes – horrendous, very serious "real-world" crimes like actual genocide – not just more likely, but so much more likely they are effectively enabling a crime that wouldn't otherwise happen?
Why should our societies – our larger, meat-world societies – tolerate the building and operating of social media platforms that destabilize them and are detrimental to them?
Or put another way, why should our societies tolerate the existence of *irresponsibly* designed and operated social media platforms, that increase violence and other antisocial behavior?
So it turns out the failure of internet culture to actually have a discourse around what even moderators are supposed to be doing is a literally lethal mistake.
And this example is merely one wrinkle in the much, much larger conversation about what moderation is, and the diversity of things that it can be, and maybe should be.
A conversation that has to happen before you can have the conversation that goes, "Okay, of the things that moderation can be, which things do we think it needs to be on our platform, and what do we need to do, in the design of our platform, to bring it into existence and make it work the way we think it should?"
Consider what has unfolded recently with Reddit turning off its API, such that tools its moderators relied on are no longer available to them. Reddit's structure is that it allows anyone to start their own forum and gives them authority to moderate it however – to a first approximation – they see fit. But it doesn't provide the tools necessary – nor, any longer, allow third parties to provide those tools – for many moderator functions to be performed, so there's a limit to what kinds of moderation can happen there, and how well it can be carried out. This has literally changed what kinds of conversations and what kinds of forums can happen on Reddit.
Now, I'm not party to what's happening inside Reddit. I don't know the logic of their decisions. But I do know that a whole lot of very thoughtful Reddit users who moderate spaces on Reddit have explained in great detail and at length ("Concision is not our brand." – a mod from r/AskHistorians, explaining on Twitter about this very thing) what their needs are and why they were objecting to Reddit turning off the API.
Reddit corporately decided that supporting those affordances was unimportant, or at least less important than something else that conflicted with them.
Reddit made a design decision that changed the nature of what moderation *could* mean on Reddit. They reduced its scope. That, in turn, changed how moderators could interact with the users they moderated, and that in turn changed how users interacted with one another.
Like I said, I'm not party to Reddit's internal corporate thinking. But I think it's a pretty good educated guess to say: Reddit's decision was not based upon what would optimize Reddit's social functions. When Reddit made this decision, I'm feeling pretty confident it was not a *social engineering* decision. It was not made to make Reddit function better in some social sense. Nobody made this decision thinking, "Actually, reducing the capacity of moderators to do tasks that are part of moderation will actually improve the social reality of Reddit in this particular way."
At very best, this decision was made to optimize something else in full awareness, "Yes, this will be detrimental to Reddit's social world, but it can't be helped, because of other considerations that outrank quality of social engineering right now."
But of course, the social effect on Reddit might have been simply dismissed, or discounted.
It's a pretty common thing for people to scoff at the idea that the affordances of a platform have something to do with how people behave on it, and that if you make the wrong decisions about how your platform operates you'll get outcomes that you won't like.
It's a pretty common thing for people to take the attitude, "Oh, geeze, what difference does it make whether this feature exists? People will do whatever they want to do anyway."
One of the things Mastodon has going for it is a userbase that mostly doesn't cop out like that. Mastodon is full of people who believe deeply that how the software works and what its affordances are actually matter immensely to how social life on Mastodon unfolds.
The problem isn't convincing Mastodonians that these things matter – it's convincing them to not take their first impressions from traumatic experiences on Twitter as gospel truths.
It's probably pretty easy for Reddit executives to sniff and say, "Well you know the user base, they're making a big to-do about nothing; they'll figure out how to moderate without those tools, they're hardly critical."
Mastodonians are smarter than to do that, but we have a bit of a problem of falling down at the next step. It's great that people here don't scorn the idea that affordances matter to user behavior, but the next step is to actually find out how affordances actually do affect user behavior.
Like, we could imagine a more enlightened Reddit not cutting off its moderators at the knees by shutting down the API access they needed. But we could also imagine an even more enlightened Reddit than that: one that built its own versions of those moderator tools right into its platform, so that moderators didn't have to use third-party tools via the API.
But we could also imagine an even more enlightened Reddit than *that*.
We could imagine an alternate-reality, even more enlightened Reddit that not only had its own built-in moderator tools, but was actually concerned with the question of whether or not it had the *right* moderator tools, and how those tools affected how Reddit functioned, socially.
It might do things like hold focus group discussions with moderators, send anthropologists into various subs to observe the behavior of moderators and virtually shadow them going about their moderating tasks, A/B test moderator tools, and run opt-in betas of new moderator tools.
You know, basic grown-up company stuff, when a company actually cares about how its software functions.
But not just that.
We could imagine an ultra enlightened Reddit that actually has opinions about how it wants its social world to function, and actually makes decisions like, "We don't like some things that we think are maybe a product of moderators not providing 24/7 coverage of high volume groups, so we're going to find out if there's some way, or ways, plural, of solving that problem. We'll investigate whether there are things we can do to either facilitate moderators providing 24/7 coverage, or obviate the need for moderators to provide it, and then we'll evaluate whether or not it worked to remedy the things that we saw as problems that we think are being caused by that."
In the social services field, this crucial last bit is called "program evaluation".
I think of it as sort of like calling one's shots in billiards. You decide what change you would like to see in the system you are designing for, you come up with an "intervention" (based on studying the problem, reading into other people's approaches to trying to solve the problem, and maybe doing a bunch of rounds of iterative experimentation), you decide, up front, how you will determine whether or not the problem has been solved, and then you implement the intervention, and then you check those criteria you previously identified as the ones that will determine whether or not the problem has been solved.
This is what I mean by rigor. This is pretty sciencey, no? It's not necessarily a controlled experiment, but it does have the form of an experiment. But it's not a *mere* experiment, either. It's not just a trial to see whether or not something will work. It's an attempt to actually do something that will work. With some slightly more rigorous testing as to whether it did.
Part of what makes it so rigorous is how formal it is, with that business of deciding in advance what the criteria of success will be. That in turn requires a certain amount of serious thinking about social phenomena, and actually getting explicit about things that, frankly, usually just get hand waved through when we're talking about social media platforms. Questions like "what even do we expect our moderators to be achieving?"
It means doing hard things like asking, "Okay, we want people to 'feel more safe': how will we be able to tell that people are feeling more (or less) 'safe'? If people were feeling more or less safe, how would we know? How would we be able to measure it? To observe it in the data? What is it that we are assuming will change in people's behavior based on how safe they feel?"
In the social sciences, this is called "operationalizing" an abstraction or concept or feeling.
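To make that concrete, here's a toy sketch of what "operationalizing" an abstraction like "feeling safe" might look like in practice. Everything here is a hypothetical illustration – the behavioral proxies, the weights, and the success threshold are all invented for the example, not anything Mastodon or Reddit actually measures – but it shows the shape of the move: you commit, in advance and in writing (or in code), to which observable behaviors will count as evidence, and what change in them will count as success.

```python
# Hypothetical sketch: turning "people feel more safe" into something
# observable. All metric names, weights, and thresholds are invented
# for illustration.

from dataclasses import dataclass


@dataclass
class BehaviorSample:
    """Aggregate observable behaviors for some cohort of users."""
    posts_per_week: float    # proxy: people who feel unsafe post less
    reply_rate: float        # proxy: fraction of posts replying to strangers
    account_deletions: int   # proxy: people leaving the platform entirely


def safety_proxy(sample: BehaviorSample) -> float:
    """Collapse several observable behaviors into one rough 'safety' score.

    Deciding *which* behaviors to include, and how to weight them, is
    exactly the explicit thinking the thread is describing: you state
    up front what you assume will change if people feel safer.
    """
    return (0.5 * sample.posts_per_week
            + 0.4 * sample.reply_rate * 10   # rescale the 0-1 rate
            - 0.1 * sample.account_deletions)


# Measurements from before and after some intervention (invented numbers).
before = BehaviorSample(posts_per_week=3.0, reply_rate=0.10, account_deletions=4)
after = BehaviorSample(posts_per_week=5.0, reply_rate=0.25, account_deletions=1)

# "Calling the shot": this criterion is decided BEFORE the intervention
# ships, not cherry-picked from the data afterward.
success_criterion = safety_proxy(after) - safety_proxy(before) > 0.5
print(success_criterion)  # prints: True
```

The point isn't the arithmetic, which is trivial; it's that writing the criterion down before the fact is what separates "program evaluation" from retroactively declaring victory.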
One of the things you might notice, from the fact that I've now twice mentioned other fields that have technical terminology that applies here: oh hey, there are other fields that have clues that pertain. They have methodologies and other cool toys you might want to play with.
Engineering is, to a first approximation, applied science. So if you want to engineer socials, you might want to start hitting up the social scientists and people in other fields that apply social science.
This is something else Mastodon has going for it: it's got social scientists around here somewhere.
Now, I appreciate what I propose here has a bootstrapping problem. I don't know whether any of the decision makers about code, protocols, or individual instances have the capacity to enlist the help of the people on Mastodon with those professional clues, and I'm not sure Mastodon has the affordances to bring them together.
But all the other challenges of connecting those parties aside, there's the one of hubris, that I want to circle back to.
Part of the reason that so many social media platforms, over and over and over and over again, found themselves shocked and surprised by things happening to them that other, earlier, social media platforms went through (and tried to tell them about) was because they assumed they knew all there was that they needed to know. They assumed their knowledge was adequately complete. They didn't go seeking out information and counsel to guide them, because they thought they didn't need it.
It never even occurred to them they did.
And that's a kind of hubris. It's a kind of low-key, chill intellectual arrogance. It's not blustery, it doesn't brag. It just assumes.
And that's a problem.
Because here's the thing: the people with the social clues are not going to beat down the door to shove them down your throat.
The people with these clues – the people who've been involved in social media governance before, the social scientists, the people who do evaluation of social programs, the urban planners and the architects, the psychological professionals – they're going to do the same things they have always done. Write news articles and blog posts and social media threads, give talks at conferences and conventions, teach classes and attend symposia and colloquia, conduct scientific experiments and publish in research journals, and generally cast their messages in bottles upon the seas of information in the hopes that they will fetch up on the shores of those who would benefit by them.
They're not going to come and staple these clues to your forehead for you.
If you want them, you're going to have to *go get them*.
And nobody goes to get these resources, nobody seeks out this expertise, who does not move past the naive arrogance of assuming there's nothing that they need to learn, that they have this social media platform thing all figured out already.
What I am hoping to achieve by this thread is to tantalize you with the evidence that there are things to know which you probably don't yet know, but would, if you only knew, like very much to know. Things that would benefit you to know.
I am hoping to enlist your curiosity in tackling to the floor the assumption you might harbor in your breast that this social media thing really isn't all that hard: you just do it the right way, and which way is the right one is really obvious.
I am attempting to entice you with the knowledge that is out there (and insofar as there are experts in these things here on Mastodon, in here with us) into wanting that knowledge enough to go looking for it.
And to ask for it.
All of you.
This message isn't just for "the people in charge".
For one thing, this is a *federated* system. This means the odds that you, personally, might be a "person in charge" in some sense go way, way up. And if not today, maybe tomorrow.
Furthermore, some instances are straight up democracies. Everybody on them is a person in charge.
But much more importantly, you have a voice. And if you're on Mastodon you probably use it.
I am hoping to convince you to use that voice not just to call for remedies you are certain will work to solve problems you haven't really specified. I'm inviting you to engage with curiosity questions like, "I wonder what the pros and cons of this are?" and "I wonder what it is that is giving me that impression, and how I might check out whether or not it is true?" and "I wonder if there are any other social media platforms that came up with a solution to this?"
I'd like to encourage you to use your voice to ask questions, like "Are we using the same definition of 'unacceptable behavior'?" and "What problem do you see your suggestion fixing?" and "What are the examples you are imagining or remembering when you make that suggestion?"
And I'd also like to encourage you *not* to use your voice – to remember to listen, to observe, to contemplate. You may have heard that old saying about communication being a two-way street: that is in fact false. Communication is a narrow one-way bridge that traffic has to take turns crossing. If you're sending, you're not receiving. If the teacup of your comprehension seems too full of your prior understandings too preciously savored, nobody else is going to pour you a serving from their own teapot.
Like I said, nobody's going to come and force clues down your throat. If you seem not to want them, you won't be given them.
The most interesting thing we can do with our voices is to open up spaces of discourse. And that's what I would most like to enlist you in.
I would love to see emerge on Mastodon, finally at long last, a discourse about social media platforms that centers social engineering as an actual thing. That is based on the ideas that we should be curious how things actually work and check our hypotheses against reality, that we should take responsibility for how our virtual built environment shapes the society that flows through it and the society in which it reposes, that there are things worth knowing here, things worth finding out, and interesting people to talk to.
Fundamentally I am hoping to *interest you* in the entire topic of social engineering, in this new sense that I mean it. Because if you are interested in it and if you talk about it with other people who are interested in it, and interest in it grows, then it will feed on itself and it will grow as a field.
And if it grows as an idea, if it grows as a topic of interest, it will connect up people, it will enlist more people, it will drive curiosity, and discovery, and experimentation, and design.
And it will change the culture in which the people who do make the decisions make them. It will make it a culture in which it is normal to consider questions like "How will we know if we succeed at solving this problem?" and "What prior art is there in this problem space?" and "What sorts of challenges are outstanding problems in the field right now?" and "What is the effect of our platform on the larger society its members belong to?"
It seems likely to me such a social environment will have beneficial effects on the social media platforms that emerge from it.
But, I confess, I haven't operationalized that yet.
(Fin)
@irenes Oooh, thanks! I will check that out!
@siderea
Thank you so much for this gorgeous thread!
I will try @mastoreaderio unroll on it.
@dasgrueneblatt It is now up as a blog post, at https://siderea.dreamwidth.org/1829989.html
@siderea Thank you so much for taking the time to write this. There's a lot to unpack. I'm an admin as well, so lots to think about here.
@siderea Okay this is a great thread and all, and very well laid out, but we TRIED that perspective and it didn't effectively CHANGE anything.
Look at the history of full-text search. It was debated EXTENSIVELY through 2017-2018 – whether to do it at all, what effect it has on Twitter and how much of an effect, what effect we expect it to have here...
and we eventually settled on not doing it. Which lasted like 4 years until a large amount of people from Twitter came at once, and really wanted full-text search, and when the volume hit a pitch after a month or two Gargron flipped it and now we have full-text search.
And like. Maybe I missed the full thrust of the debates but I don't think that was theory-based? I don't think there was engagement with the objections from before in light of new information or theories, or perspectives or priorities. I think it just, with enough pressure, was changed.
(FL/OSS might be less democratic/uniform-consensus-based than you'd think, maybe.)
@siderea (I mean this honestly -- I DON'T know. I think it would be valuable to look back for those discussions, from this perspective.)
The other issue I worry about is, this perspective asks us to gather evidence by looking at social dynamics SEPARATED from their context, which is doable with some distancing, but like. This approach is fundamentally about self-consciously experimenting on ourselves, which is really asking a lot in terms of distancing. One person's "unjustifiable harassment campaign" is another's "no, this person genuinely has a pattern of creating dangerous social dynamics around themself". (And I say this as someone who's pretty sure I've held opinions on BOTH sides of that.) Debating what to do structurally requires a shared agreement about what happened in the past, IN PARTICULAR in socially-intense interactions/events. I don't know how much we can rely on that.
@siderea Thank you so much for this great thread! You've given me a lot to think about.
I have been moderating all kinds of online spaces since the early 00s. Forums, FB groups, wikis, etc etc. All of that experience will inform the construction of the moderation tools on the fediverse platform I'm building (https://join.piefed.social/).
Your thread helped me get more conscious about the bigger picture of those tools, the "social engineering" that is going on.
unsolicited thought after browsing your profile, feel free to pass over it!
the Beehaw community has a need for more robust moderation tools in the type of link-aggregator-forum-space where Lemmy and Kbin are, and they’ve done good amount of thinking around moderation and the social engineering of their community. They have a list of mod tool needs that could be something to consider for your platform too: https://discuss.online/post/12787
@sphygmus Great, thanks for that! I remember seeing it 6 months ago but I had no hope of finding it again. I'll definitely be taking those ideas seriously.
I wish PieFed was anywhere near ready for the Beehaw migration but realistically it's months away from being a contender.