There are two problems coming for Mastodon of which apparently an awful lot of people are unaware. These problems are coming for Mastodon not because of anything specific to Mastodon: they come for all growing social media platforms. But for some reason, most people haven't noticed them.
The first problem is that scale has social effects. Most technical people know that scale has technological effects. Same thing's true on the social side, too.
CC: @Gargron
For instance, consider the questions "How likely, statistically speaking, are you to run into your boss on this social media platform?" and "How likely, statistically speaking, are you to run into your mother on that social media platform?" While obviously there is wide individual variation based on personal circumstances, in general the answer to those questions is going to be a function of how widespread adoption is in one's communities.
Thing is, people behave differently on a social media platform when they think they might run into their boss there. People behave differently when they think they might run into their mother.
And it's not just bosses and mothers, right? I just use those as obvious examples that have a lot of emotional charge. People also behave differently depending on whether or not they think their next-door neighbors will be there (q.v. Nextdoor.com).
How people behave on a social media platform turns out to be a function of whom they expect to run into – and whom they actually run into! – on that social media platform. And that turns out to be a function of how penetrant adoption is in their communities.
And a problem here is that so many assume that the behavior of users of a given social media platform is wholly attributable to the features and affordances of that social media platform!
It's very easy to mistake the effects of being a niche or up-and-coming platform for something the platform is getting right in its design.
The example I gave – people behaving differently depending on how likely they estimate they are to run into certain other parties in their lives – is not the only example of how scale affects how people engage with a social media platform. There are others that I know about, and probably lots I don't.
For instance, tech people are probably aware of the phenomenon that virus writers are generally more attracted to writing viruses for platforms that have more users. This is one of the main reasons there are (and have always been) fewer viruses written for macOS than for Windows.
You've probably never thought of it this way – mad props to the article in Omni I read a long time ago that brought this to my attention – but writing a virus is a kind of *griefing*. Like in a game. It's about fucking up other people's shit for kicks and giggles, if not for profit, and doing so at scale.
Well, griefers – people who are motivated by enjoying griefing as a pastime – are going to be more drawn to bigger platforms with more people to fuck with.
Deliberate malicious obnoxiousness and trolling varies not *linearly* with population size, but *geometrically* or worse.
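(To put a hedged back-of-the-envelope number on that: the count of possible pairwise interactions among n users is n(n−1)/2, which grows roughly as n². So a platform that grows tenfold gets not ten times the potential friction but closer to a hundred times – and that's before you account for griefers being preferentially drawn to the bigger pool.)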
Or put another way, a social media platform can avoid a certain amount of social griefing just by being small, and therefore not worth the time of griefers who are looking for bigger fish to fry. As that platform grows, it loses that protection.
So you can't tell, not for sure, how good a platform's systems are for managing that kind of griefing until it gets big enough to really start attracting griefing at scale.
So that's one problem: there are social effects of scale that shape how people behave on a social media platform, so as the platform grows in adoption, how people behave on it will change. Usually not in ways that are thought of as for the better, because a niche platform can avoid various social problems that can no longer be avoided as it grows.
The other problem I think is even more fascinating.
When a social media platform is founded, there are filter effects on who joins that platform. But as a social media platform grows, those filters – some of them – fall away.
When I talk about filters, I mean things like the following famous examples:
* When Facebook was founded, it was only for students at universities; one could only sign up for it with a college email address. Consequently, Facebook's early userbase was almost entirely college students – with all that implies for socioeconomic class.
* When G+ was founded, it was initially opened to Google employees, and used an invite code system for rollout, such that overwhelmingly its early users were people in the same social worlds as Googlers.
* In the heyday of USENET, the vast majority of internet users, period, were college students majoring in technical subjects.
These social spaces, consequently, inherited (in the object-oriented sense) the social norms of the demographics that initially populated them.
Regardless of the specifics of what different platforms' initial userbases are, one of the fascinating consequences of having such filters is a higher level of social homogeneity.
I know it doesn't seem like a very high level of social homogeneity when you're in it. "What are you talking about, lady?! We have both emacs users AND vi users!"
But in a way that is largely invisible at the time to the people in it, they're in a kind of cultural bubble. They don't realize that a certain amount of social interaction is being lubricated by a common set of assumptions about how people behave and how people *should* behave.
Now, they may not like those assumptions very much; they may not be very nice assumptions, or ones they find very agreeable. But they're *known*. Even if only unconsciously or inchoately. And that turns out to count for a lot, in terms of reducing conflict or making it manageable.
But, of course, as a social media platform grows, those filters change or fall away.
Facebook expanded enrollment to high school students, then dropped the requirement of an educational affiliation altogether.
AOL – which at the time was mailing physical install media to every mailing address in the United States, unsolicited, repeatedly – plugged itself into USENET and opened the floodgates, in an event now referred to as the September That Never Ended.
(For those of you who don't know, that term refers to the fact that previously, large numbers of clueless users who didn't know how to operate USENET only showed up at the beginning of the American academic year. AOL, not being tied to the academic calendar and adding large numbers of new users every day, effectively swamped the capacity of USENET culture to assimilate new members, by delivering a September's worth of cluelessness every month forever thereafter.)
Additionally, as a social media platform becomes more popular, it becomes more worth the effort to get over the speed bumps that discourage adoption.
We've already seen this with regard to Mastodon. Where previously an awful lot of people couldn't be bothered to figure out this whole federation, picking-a-server thing to set up an account in the first place, of late it has seemed much more worth the effort of sorting that out – not just because Twitter sucks and its users are looking for an alternative, but because Mastodon has become more and more attractive the more people use it.
So people who once might have been discouraged from being Mastodon users are no longer discouraged, and that itself is the reduction of a filter. Mastodon is no longer filtering quite so much for people who are unintimidated by new technologies.
Now you might think that's a good thing, you might think that's a bad thing: I'm just pointing out it IS a thing.
Over time, as a social media platform becomes more and more popular, its membership starts reflecting more and more accurately the full diversity of individuals in a geographic area or linguistic group.
That may be a lovely thing in terms of principles, but it comes with very real challenges – challenges that, frankly, catch most people entirely by surprise, and that they are not really equipped to think about how to handle.
Most people live in social bubbles to an extent that is hard to overstate. Our societies allow a high degree of autonomy in deciding with whom to affiliate, so we are to various degrees afforded the opportunity to just not deal with people who are too unpleasant for us to deal with. That can include people of cultures we don't particularly like, but it also includes people who are just personally unpleasant.
Many years ago, at the very beginning of my training to become a therapist, I was having a conversation with a friend (not a therapist) about the challenges of personal security for therapists.
She said, of some example I gave of a threat to therapist safety, "But surely no reasonable person would ever do that!"
"I'm pretty sure," I replied, "the population of people with whom therapists work is not limited only to people who are *reasonable*."
I think of that conversation often when discussing social media. Many of the people who wind up in positions to decide how social media platforms operate and how to try to handle the social problems on them are nice, middle class, college educated, white collar folks whose general attitude to various social challenges is "But surely no reasonable person would ever do that!"
As a social media platform grows, and its user base becomes more and more reflective of the underlying society it is serving, it will have more and more users on it who behave in ways that the initial culture will not consider "reasonable".
This is the necessary consequence of having less social homogeneity.
Some of that will be because of simple culture clash, where new users come from other cultures with other social expectations and norms. But some of it will be because older users weren't aware they were relying on the niche nature of the platform to just *avoid* antisocial or poorly socialized people, and don't really have a plan for what to do about them when they show up in ever greater numbers – except to leave. Only now they *can't* leave, not without real cost, because they're invested in the platform.
So the conflict level goes up dramatically.
As a side note, one of the additional consequences of this phenomenon – where a growing social media platform starts having a shifting demographic that is more and more culturally and behaviorally diverse, and starts reflecting more and more accurately the underlying diversity of the society it serves, and consequently has more and more expressed conflict – is that a rift opens up between the general mass of users, on the one hand, and the parties that are responsible for the governance of the social media platform, on the other.
This is where things go really sour.
That's because the established users and everyone in a governance position – from a platform's moderators to its software developers to its corporate owners or instance operators – wind up having radically different perspectives, because they are quite literally witnesses to different things.
The established users, who are still within their own social bubbles, have an experience that feels to them like, "OMG, where did all these jerks come from? The people responsible for running this place should do something to fix it – things were fine here the other day, they need to just make things like they used to be. How hard could it be?" They are only aware of the problems that they encounter personally, or are reported to them socially by other users or through news media coverage of their platform.
But the parties responsible for governance get the fire hose turned on them: they get to hear ALL the complaints. They get an eagle's eye view of the breadth and diversity and extent of problems.
Where individual users see one problem, and don't think it's particularly difficult to solve, the governance parties see a huge number of problems, all at once, such that even if they were easy to solve, it would still be overwhelming just from numbers.
But of course they're not necessarily as easily solved as the end users think. End users think things like, "Well, just do X!", whereas the governance team is well aware that if they did X, it might solve the problem for one group of users, but it would make things worse for other users over there having a different problem.
The established users wind up feeling bewildered, hurt, and betrayed by the lack of support around social problems from the governance parties, and, it being a social media platform, they're usually not shy about saying so. Meanwhile, the governance parties start feeling (alas, not incorrectly) that their users are not sympathetic to what they're going through: how hard they're working, how hard they're trying, and how incredibly unpleasant what they're dealing with is. They start feeling resentful towards their users, and, in the face of widespread intemperate verbal attacks from their users, sometimes become contemptuous of them.
The dynamic I just described is, alas, the best-case scenario. Add in things like cultural differences between the governance parties and the users, language barriers, good old-fashioned racism, sexism, homophobia, transphobia, etc., and any other complexity, and this goes much worse, much faster.
For anyone out there who is dubious about this difference in perspective between the governance parties and the end users, I want to talk about the most dramatic example of it that I personally encountered.
There used to be on LiveJournal a "community" (group discussion forum) called, IIRC, "internet_sociology". Pretty much what it sounded like, only it was way more interested in the sociology (and anthropology) of LiveJournal itself, of course, than in any of the rest of the internet.
Anyways, one day – in, IIRC, the late '00s – somebody posted there a dataviz image of the COMPLETE LiveJournal social graph.
And that was the moment that English-speaking LiveJournal discovered that there was an entirely other HALF of LJ that was Russian-speaking, of which they knew nothing, and to which there was almost no social connection.
For LJ users who had just discovered the existence of ЖЖ, it was kind of like discovering the lost continent of Atlantis. The dataviz made it very clear: it represented the social graph of the platform they were on as two huge crescents, barely connected, but about the same size. And all along, the governance parties of LJ were also the governance parties of ЖЖ.
And it turns out, absolutely unsurprisingly, LJ and ЖЖ had very different cultures, because they had had different adoption filters to start out with. LJ had initially been overwhelmingly adopted by emo high school students as a *diary* platform. (LJ once jokingly announced it was adding an extra server just to index the word "depression".) ЖЖ had initially been adopted by politically active adults – average age in their 30s – as a *blogging* platform.
Turns out, also absolutely unsurprisingly, these two populations of users wanted *very* different features, and had quite different problems.
One of the ways LJ/ЖЖ threaded that needle was to make some features contingent upon the character set a user picked. LiveJournal literally had "Cyrillic features": features that had nothing to do with the character set itself, but that only turned on for an account if it elected that character set.
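To make the mechanism concrete, here's a minimal sketch of what that sort of gating amounts to – all the names here are hypothetical, since nothing in this post describes LJ's actual implementation:

```python
# Hypothetical sketch of per-charset feature gating, in the spirit of
# LJ's "Cyrillic features". All feature names are illustrative, not LJ's.

BASE_FEATURES = {"journal", "comments", "friends_page"}

# Features that switch on only when an account elects the Cyrillic
# character set, even though they have nothing to do with text encoding.
CYRILLIC_ONLY_FEATURES = {"post_ratings", "regional_directory"}

def enabled_features(charset: str) -> set[str]:
    """Return the feature set for an account, keyed on its charset choice."""
    features = set(BASE_FEATURES)
    if charset == "cyrillic":
        features |= CYRILLIC_ONLY_FEATURES
    return features

# Two accounts on the same platform effectively see different products:
assert "post_ratings" in enabled_features("cyrillic")
assert "post_ratings" not in enabled_features("latin")
```

The design-relevant point is that the charset choice served as a proxy for which of the two user populations an account belonged to.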
Also unsurprisingly, when a Russian company bought LJ/ЖЖ from an American company, the governance parties started prioritizing the ЖЖ users' issues and feature requests, to the considerable confusion and distress of the LJ users who had been unaware of the very existence of ЖЖ. "Why on Earth would we want a feature that does *this*? Why would they think we would want it? Is LJ *insane*? What are they trying to make this place?" No: whatever feature it was, it actually was a pretty attractive one for someone who's a political blogger trying to maximize their reach – i.e., a ЖЖ user.
You can see how a pretty enormous rift can open up between end users, who have literally no clue about some of the most basic facts of the platform – like, say, that fully 50% of the user base is radically different from them in language and culture and usage patterns and needed affordances – and the governance parties, who are trying to juggle all the anvils, some of which are on fire.
There's a little exercise one can do, if one is an end user of a social media platform (or, for that matter, a governance party to a niche social media platform that has yet to hit the upslope of the diversity wave) and one wants a better sense of what governance parties have to deal with. If you've ever had a retail job dealing with the general public, just remember what the general public was like to deal with when you waited tables or ran a register or took orders or answered phones.
And if you've never had such a job yourself, or it's been a while, take yourself to a place like Reddit's r/TalesFromRetail or r/TalesFromYourServer and check out the sorts of things people who deal with the general public find themselves having to deal with.
And then reflect on this: all those irrational, entitled, belligerent, obnoxious people are loose in the world, and as your social media platform grows, it will eventually hoover THEM up from the bottom of the pool.
Because that – and worse (so very much worse) – is what your governance parties have to deal with.
I don't just mean governance parties have to deal with rude people being rude to them. First of all, the problem is so much worse than mere rudeness, and social problems extend far beyond two parties being in some sense in conflict. But secondly, and more importantly, it's *made their problem* when someone is "rude" to someone else. They don't just have to deal with obnoxious people being obnoxious at them; they have to, in some sense, do something about obnoxiousness in general. They are often put in the position of having to show up and confront the obnoxious person, or otherwise do something to frustrate them – which will probably not make the person less obnoxious, and will also bring the governance party to their attention.
And if you are yourself a governance party who finds yourself having more and more difficulty empathizing with and respecting your end users, maybe remember what it was like to *be* an end user: to be largely helpless to handle all sorts of social problems yourself, and to be stuck relying on authorities who may be unsympathetic, actively hostile, and/or just both clueless and clue-resistant.
I mean, just reflect on what it was like to be a Twitter user over the last year. Only don't let yourself use the cop-out of "But you don't have to be a Twitter user, you can leave Twitter."
A lot of people, especially on the Fediverse, wind up being governance parties precisely because they don't want to be disempowered anymore. They want to be the people who make decisions about how to solve social problems on their social media platform of choice, and Mastodon/etc makes that much easier than trying to get a job with Twitter's Trust and Safety team.
So it's worth remembering, if you are a governance party on the Fediverse: that's great for you, you're empowered by that arrangement – but your end users are still end users. They get search only if you choose to give them search. They still rely on you to sit in judgment of the reports they file on other users, and to take action on bad actors on their instance. They still experience themselves as largely just as disempowered as they were on Twitter. They have a choice of which lord's fields to till, but they're still peasants.
But I digress.
Returning to my larger point about the two problems that are coming for Mastodon: I'm seeing a lot of people make a lot of assumptions about how well things are working, in terms of solving social problems, that are basically predicated on not knowing that these two problems are bearing down on us.
This puts me in the weird position of actually arguing against empiricism. I'm usually a big fan of "throw it against the wall and see if it sticks" experimentalism as a remedy for head-in-the-clouds theorizing.
But this is really a situation in which foresight is desperately necessary.
It is simply not accurate to extrapolate the future efficacy of various attempts to solve social problems on Mastodon from how well they've worked so far.
When you're climbing an adoption curve, past performance is not a guarantee of future results.
A couple of decades ago, Clay Shirky gave a talk, which he then published as an essay, "A Group Is Its Own Worst Enemy", about how, over and over and over again, people who develop online social spaces get surprised by things that happen on their online space – things which had happened previously on OTHER parties' online social spaces, and which those spaces' governance parties had attempted to warn others about.
Now, I have a bunch of reservations about specific details in that essay, but he was sure right about how over and over and over again Bad Things happen to social platforms, and the governance parties who lived through them try to warn others, and they're pretty reliably ignored.
Maybe we could not do that this time?
You will probably like David Chapman's essay, "Geeks, MOPs, and Sociopaths".
It's about how communities always get invaded by those who wish to USE the community instead of BE the community.
@weekend_editor I might – I'll certainly check it out – but I'm just a little dubious that I'm going to find value in anything that can be described with the framing of "those who are the community versus those who use the community".
I was going to explain why, but then an in vivo example showed up in the other reply you got. There is always somebody who will be along shortly to explain why some other perfectly prosocial and usefully contributing demographic aren't *really* members of a community because *they* somehow benefit from being members of that community, so they're just *using* it.
> I'm just a little dubious that I'm going to find value in anything that can be described with the framing of "those who are the community versus those who use the community".
Fair enough.
In case it makes things easier, Chapman's thesis is rather close to what you've been saying.
(1) A community is often founded by creators and enthusiasts for their creations. Basically a bunch of people who make a particular thing and those REALLY into it.
(2) As a community scales, it attracts people who want to use membership to leverage their own social capital. These are "influencer" wannabes and the like. This is still pretty ok with everyone.
(3) Eventually, if it gets big enough, somebody figures out how to monetize it. Invariably the business people take over. This CAN be ok for a good long while, but the business pressures toward the dark triad are significant.
(4) Then Cory Doctorow's famous en-<mumble>-ification process takes hold. (Chapman wrote back in the halcyon days of 2010, so he would not have used this term.)
(5) The founders wonder how that happened AGAIN, and begin an exodus to a new community.
You spoke of griefers, which is certainly one way this manifests.
Chapman & Doctorow speak of the more or less inevitable economic pressures, whether social capital or monetary capital. The naïveté of many of us in our geekier mode makes us easier exploitation targets.
I've personally seen that cycle 3 or 4 times. It starts to look pretty familiar; the question is whether it's inevitable.
> There is always somebody who will be along shortly to explain why some other perfectly prosocial and usefully contributing demographic aren't *really* members of a community because *they* somehow benefit from being members of that community, so they're just *using* it.
There are, alas, always those who wish to police boundaries. Even when that's a bad idea.
A thing in which I take some pride about my career was mentoring junior colleagues. I tried VERY hard not to say "no" when they wanted to cross a boundary. Instead, I explained (1) what the boundary was, (2) why it was a good thing it was there, and (3) what they had to do to cross it while retaining credibility. (Or that it was a bad boundary, and I'd be happy to smash it together with them.)
Now, the "in vivo example" wanted to point out commercially published authors as an example of using social media for promotion.
Fine.
One example is @scalzi (or Twitter, or ...). He's an example of GOOD use of communities: he's polite, funny, and generally erudite. (Yes, he does promote his books; why would anyone expect otherwise?)
The 50% of the time he doesn't talk about his books (family, friends, politics, food, "whatever") is interesting. The rest is STILL interesting and informative. He's only gotten me to buy a book once that way. Still, the chatter attracts me.
(Another example is me. I only started using social media to promote my blog:
https://www.someweekendreading.blog/
See what I did there, using a post about promotion to promote? Not especially clever. But I hope not especially annoying.)
So our interlocutor's example of writers using social media CAN be a fine thing.
BTW, it just came to my attention that Chapman is on Mastodon:
So I should have at least cited him, and given him the chance to say something or other if he wants.