There are two problems coming for Mastodon of which apparently an awful lot of people are unaware. These problems are coming for Mastodon not because of anything specific to Mastodon: they come to all growing social media platforms. But for some reason, most people haven't noticed them.
The first problem is that scale has social effects. Most technical people know that scale has technological effects. Same thing's true on the social side, too.
CC: @Gargron
For instance, consider the questions "How likely, statistically speaking, are you to run into your boss on this social media platform?" and "How likely, statistically speaking, are you to run into your mother on the social media platform?" While obviously there is wide individual variation based on personal circumstances, in general the answer to those questions is going to be a function of how widespread adoption is in one's communities.
Thing is, people behave differently on a social media platform when they think they might run into their boss there. People behave differently when they think they might run into their mother.
And it's not just bosses and mothers, right? I just use those as obvious examples that have a lot of emotional charge. People also behave differently depending on whether or not they think their next-door neighbors will be there (q.v. Nextdoor.com).
How people behave on a social media platform turns out to be a function of whom they expect to run into – and whom they actually run into! – on that social media platform. And that turns out to be a function of how penetrant adoption is in their communities.
And a problem here is that so many assume that the behavior of users of a given social media platform is wholly attributable to the features and affordances of that social media platform!
It's very easy to mistake what are effects of being a niche or up-and-coming platform for something the platform is getting right in its design.
The example I gave, about people behaving differently depending on the likelihood they estimate of running into certain other parties in their lives, is not the only example of how scale affects how people engage with a social media platform. There are others that I know about, and probably lots I don't.
For instance, tech people are probably aware of the phenomenon that virus writers are generally more attracted to writing viruses for platforms that have more users. This is one of the main reasons that there are (and have always been) fewer viruses written for macOS than for Windows.
You've probably never thought of it this way – mad props to the article in Omni I read a long time ago that brought this to my attention – but writing a virus is a kind of *griefing*. Like in a game. It's about fucking up other people's shit for kicks and giggles, if not for profit, and doing so at scale.
Well, griefers – people who are motivated by enjoying griefing as a pastime – are going to be more drawn to bigger platforms with more people to fuck with.
Deliberate malicious obnoxiousness and trolling varies not *linearly* with population size, but *geometrically* or worse.
Or put another way, a social media platform can avoid a certain amount of social griefing just by being small, and therefore not worth the time of griefers who are looking for bigger fish to fry. As that platform grows, it loses that protection.
So you can't tell, not for sure, how good a platform's systems are for managing that kind of griefing until it gets big enough to really start attracting griefing at scale.
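A back-of-the-envelope illustration (mine, not part of the original argument) of why this grows superlinearly: the number of potential pairs of users, and so the number of potential griefer-target relationships, grows roughly with the square of the user count.

```latex
% Potential griefer-target pairs among n users:
\binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\approx\; \frac{n^2}{2}
```

A tenfold growth in users multiplies the potential pairs by roughly a hundred: ten times as many potential griefers, each with ten times as many potential targets, before even counting the larger audience each stunt earns.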
So that's one problem: there are simply social size effects that affect how people behave on a social media platform, so as the platform grows in adoption, how people behave on it will change. Usually not in ways that are thought of as for the better, because a niche platform can avoid various social problems that it can no longer avoid as it grows.
The other problem I think is even more fascinating.
When a social media platform is founded, there are filter effects on who joins that platform. But as a social media platform grows, those filters – some of them – fall away.
When I talk about filters, I mean things like the following famous examples:
* When Facebook was founded, it was only for students at universities; one could only sign up for it with a college email address. Consequently, Facebook's early userbase was almost entirely college students – with all that implies for socioeconomic class.
* When G+ was founded, it was initially opened to Google employees, and used an invite code system for rollout, such that overwhelmingly its early users were people in the same social worlds as Googlers.
* In the heyday of USENET, the vast majority of internet users, period, were college students who were majoring in technical subjects.
These social spaces, consequently, inherited (in the object oriented sense) the social norms of the demographics that initially populated them.
Regardless of the specifics of what different platforms' initial userbases were, one of the fascinating consequences of having such filters is a higher level of social homogeneity.
I know it doesn't seem like a very high level of social homogeneity when you're in it. "What are you talking about, lady?! We have both emacs users AND vi users!"
But in a way that is largely invisible at the time to the people in it, they're in a kind of cultural bubble. They don't realize that a certain amount of social interaction is being lubricated by a common set of assumptions about how people behave and how people *should* behave.
Now, they may not like those assumptions very much; they may not be very nice assumptions, or ones they find agreeable. But they're *known*. Even if unconsciously or inchoately. And that turns out to count for a lot, in terms of reducing conflict or making it manageable.
But, of course, as a social media platform grows, those filters change or fall away.
Facebook expanded enrollment to high school students, then dropped the requirement of an educational affiliation altogether.
AOL, which at the time was mailing physical install media to every mailing address in the United States, unsolicited, repeatedly, plugged itself into USENET and opened the floodgates in an event that is referred to as the September That Never Ended.
(For those of you who don't know, that term refers to the fact that previously, large numbers of clueless users who didn't know how to operate USENET only showed up at the beginning of the American academic year. AOL not being tied to the academic calendar and having large numbers of new users every day, effectively swamped the capacity of USENET culture to assimilate new members by sending a September's worth of cluelessness every month forever thereafter.)
Additionally, as a social media platform becomes more popular, it becomes more worth the effort to get over the speed bumps that discourage adoption.
We've already seen this with regards to Mastodon. Where previously an awful lot of people couldn't be bothered to figure out this whole federation, picking-a-server thing to set up an account in the first place, of late it has seemed much more worth the effort of sorting that out, not just because Twitter sucks and its users are looking for an alternative, but because Mastodon has become more and more attractive the more and more people use it.
So people who once might have been discouraged from being Mastodon users are no longer discouraged, and that itself is the reduction of a filter. Mastodon is no longer filtering quite so much for people who are unintimidated by new technologies.
Now you might think that's a good thing, you might think that's a bad thing: I'm just pointing out it IS a thing.
Over time, as a social media platform becomes more and more popular, its membership starts reflecting more and more accurately the full diversity of individuals in a geographic area or linguistic group.
That may be a lovely thing in terms of principles, but it comes with very real challenges – challenges that, frankly, catch most people entirely by surprise, and that they are not really equipped to think about how to handle.
Most people live in social bubbles to an extent that is hard to overstate. Our societies allow a high degree of autonomy in deciding with whom to affiliate, so we are to various degrees afforded the opportunity to just not deal with people that are too unpleasant for us to deal with. That can include people of cultures we don't particularly like, but it also includes people who are just personally unpleasant.
Many years ago, at the very beginning of my training to become a therapist, I was having a conversation with a friend (not a therapist) about the challenges of personal security for therapists.
She said, of some example I gave of a threat to therapist safety, "But surely no reasonable person would ever do that!"
"I'm pretty sure," I replied, "the population of people with whom therapists work is not limited only to people who are *reasonable*."
I think of that conversation often when discussing social media. Many of the people who wind up in positions to decide how social media platforms operate and how to try to handle the social problems on them are nice, middle class, college educated, white collar folks whose general attitude to various social challenges is "But surely no reasonable person would ever do that!"
As a social media platform grows, and its user base becomes more and more reflective of the underlying society it is serving, it will have more and more users on it who behave in ways that the initial culture will not consider "reasonable".
This is the necessary consequence of having less social homogeneity.
Some of that will be because of simple culture clash, where new users come from other cultures with other social expectations and norms. But some of that will be because older users weren't aware they were relying on the niche nature of the platform to just *avoid* antisocial or poorly socialized people, and don't really have a plan for what to do about them when they show up in ever greater numbers, except to leave, only now they *can't* leave, not with impunity, because they're invested in the platform.
So the conflict level goes up dramatically.
As a side note, one of the additional consequences of this phenomenon – where a growing social media platform starts having a shifting demographic that is more and more culturally and behaviorally diverse, and starts reflecting more and more accurately the underlying diversity of the society it serves, and consequently has more and more expressed conflict – is that a rift opens up between the general mass of users, on the one hand, and the parties that are responsible for the governance of the social media platform, on the other.
This is where things go really sour.
That's because the established users and everyone in a governance position – from a platform's moderators to its software developers to its corporate owners or instance operators – wind up having radically different perspectives, because they are quite literally witnesses to different things.
The established users, who are still within their own social bubbles, have an experience that feels to them like, "OMG, where did all these jerks come from? The people responsible for running this place should do something to fix it – things were fine here the other day, they need to just make things like they used to be. How hard could it be?" They are only aware of the problems that they encounter personally, or are reported to them socially by other users or through news media coverage of their platform.
But the parties responsible for governance get the fire hose turned on them: they get to hear ALL the complaints. They get an eagle's eye view of the breadth and diversity and extent of problems.
Where individual users see one problem, and don't think it's particularly difficult to solve, the governance parties see a huge number of problems, all at once, such that even if they were easy to solve, it would still be overwhelming just from numbers.
But of course they're not necessarily as easily solved as the end users think. End users think things like, "Well just do X!" where the governance team is well aware, "But if we did X, that might solve it for you, but it would make it worse for these other people over here having a different problem."
The established users wind up feeling bewildered, hurt, and betrayed by the lack of support around social problems from the governance parties, and, it being a social media platform, they're usually not shy about saying so. Meanwhile, the governance parties start feeling (alas, not incorrectly) that their users are not sympathetic to what they're going through, how hard they're working, how hard they're trying, and how incredibly unpleasant what they're dealing with is. They start feeling resentful towards their users, and, in the face of widespread intemperate verbal attacks from their users, sometimes become contemptuous of them.
The dynamic I just described is, alas, the best case scenario. Add in things like cultural differences between the governance parties and the users, language barriers, good old fashioned racism, sexism, homophobia, transphobia, etc, and any other complexity, and this goes much worse, much faster.
For anyone out there who is dubious about this difference in perspective between the governance parties and the end users, I want to talk about the most dramatic example of it that I personally encountered.
There used to be on LiveJournal a "community" (group discussion forum) called IIRC "internet_sociology". Pretty much what it sounded like, only it was way more interested in the sociology (and anthropology) of LiveJournal itself, of course, than any of the rest of the internet.
Anyways, one day in, IIRC, the late '00s, somebody posted there a dataviz image of the COMPLETE LiveJournal social graph.
And that was the moment that English-speaking LiveJournal discovered that there was an entirely other HALF of LJ that was Russian-speaking, of which they knew nothing, and to which there was almost no social connection.
For LJ users who had just discovered the existence of ЖЖ, it was kind of like discovering the lost continent of Atlantis. The dataviz made it very clear. It represented the social graph of the platform they were on as two huge crescents, barely connected, but about the same size. And all along, the governance parties of LJ were also the governance parties of ЖЖ.
And it turns out, absolutely unsurprisingly, LJ and ЖЖ had very different cultures, because they had had different adoption filters to start out with. LJ initially had been overwhelmingly adopted by emo high school students as a *diary* platform (LJ once jokingly announced it was adding an extra server just to index the word "depression".) ЖЖ had initially been adopted by politically active adults – average age, in their 30s – as a *blogging* platform.
Turns out, also absolutely unsurprisingly, these two populations of users wanted *very* different features, and had quite different problems.
One of the ways LJ/ЖЖ threaded that needle was to make some features literally contingent upon the character set a user picked. LiveJournal literally had "Cyrillic features": features that had nothing to do with the character set itself, but that only turned on for an account if it elected that character set.
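A rough sketch of what charset-gated "Cyrillic features" could look like mechanically. This is purely my own illustration of the feature-flag pattern the post describes; the names (`User`, `has_feature`, `CYRILLIC_FEATURES`) and the specific features are invented, not LiveJournal's actual code.

```python
from dataclasses import dataclass

# Features available to every account.
BASE_FEATURES = {"journal", "comments", "friends_page"}

# Features that had nothing to do with rendering text, but were only
# switched on for accounts that elected the Cyrillic character set.
# (The feature names here are hypothetical examples.)
CYRILLIC_FEATURES = {"friends_of_friends_page", "top_journals_rating"}


@dataclass
class User:
    name: str
    charset: str  # the character set the account elected at signup


def has_feature(user: User, feature: str) -> bool:
    """Return True if this feature is enabled for this account."""
    if feature in BASE_FEATURES:
        return True
    # The gate: the flag keys off the elected charset, not off anything
    # actually related to text encoding.
    return user.charset == "cyrillic" and feature in CYRILLIC_FEATURES


lj_user = User("english_diarist", "utf-8")
zhzh_user = User("political_blogger", "cyrillic")

assert not has_feature(lj_user, "top_journals_rating")
assert has_feature(zhzh_user, "top_journals_rating")
```

The interesting design choice is that the charset election became a proxy for cultural membership: one signup field quietly partitioned the userbase into two feature sets.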
Also unsurprisingly, when a Russian company bought LJ/ЖЖ from an American company, the governance parties started prioritizing the ЖЖ users' issues and feature requests, to the considerable confusion and distress of the LJ users who were entirely unaware of the existence of ЖЖ. "Why on Earth would we want a feature that does *this*? Why would they think we would want it? Is LJ *insane*? What are they trying to make this place?" No, whatever feature it was actually was a pretty attractive one for someone who's a political blogger trying to maximize their reach, i.e. ЖЖ users.
You can see how a pretty enormous rift can open up between end users, who have literally no clue as to some of the most basic facts of the platform – like, say, entirely 50% of the user base is radically different from them in language and culture and usage patterns and needed affordances – and the governance parties who are trying to juggle all the anvils, some of which are on fire.
@siderea My experience aligns with what you're saying. I had difficulties in a wiki community where the people responsible for conflict resolution were volunteers, that is, self-appointed, without any kind of qualifications or even referrals from other users. Some of them were unsophisticated, dare I say uneducated and gullible.
The wiki boasts that it protects its contributors, but frankly, if I'd had any idea how open to harassment, stalking and downright stupidity I would be, I would never have got started.
So: volunteer, unqualified moderators are a problem and, as you say, will be burdened even more in a large user group.
In part, the troubles I had were due to cultural differences. The difference between people speaking Russian and English may be obvious, but the differences between Australians and Americans are more subtle and misunderstandings are exacerbated if the American party thinks there are no differences and what you said means exactly what they think it means.
Only part-way through this thread, and recognising a lot of this. :D
I'd be interested in being a fly-on-the-wall to a conversation between yourself and @ifixcoinops who has been running Improbable Island for over a decade.
He noted some of his experiences here:
@siderea
There's something else that makes a difference too: the distributed nature of the platform. On a centralized platform, you can't go elsewhere without leaving the platform entirely. On a distributed platform, you can go somewhere else (to a different instance) without leaving the platform or losing your connections. Any instance that allows griefers and trolls free rein will find everyone else going elsewhere.
@siderea
That interacts with the ability to block people or entire instances. The only ones who disfavor blocking are the trolls/griefers who are increasingly isolated on their own instances. That keeps the incentives for the developers in favor of improving blocking features, since it's the receiving side where those are used and that's where the trolls aren't. That makes the platform unattractive to the trolls, they don't like it when they can only annoy each other.
@tknarr This is the kind of comment that makes me dislike Mastodon.
Look, you have the opportunity to learn something new. Somebody is saying something that you haven't already heard. But instead, you're more interested in talking about what you are already certain about.
The jingoistic pro-Mastodon nonsense you are spouting is garbage, and I feel, about its making its appearance in my space, about the same I would if somebody threw trash in my wading pool.
@siderea @tknarr
Thank you for the great thread, and yes, I can now appreciate that sentiment.
This idea of "this is new and corporate enshittification was the only problem we ever had" is an oversimplification that's all too easy to assume. Just trying to read up on the details from all sides of the recent TBS and Tech.LGBT situation and the preceding PV collapse, we've only seen early precursors, and those that are warning based on past experiences are bulldozed over with #MastodonExceptionism
@siderea These are both excellent points. What can we do about them?
@jupiter_rowland I didn't mean to suggest that we cling to homogeneity and resist diversification. I meant, can we develop OER to teach us how to be more understanding as our spaces become more diverse, for how to deescalate, for healthy boundary setting, etc?
One of the best things about ActivityPub/Mastodon is that it's run by The People. In this case, it can also be the worst. Some servers promote hate as much as the corporate ones do.
@siderea
Much the same applies in broader software contexts, often with highly-trained specialists
*working in their own field*.
@siderea I really appreciate your thread. In particular as it is much easier to think about remedies while it's still quiet, as opposed to when the storm starts.
Things I would add to the list: people trying to game the system to spread their message are attracted not only by the size of the platform but also by the influence its users have elsewhere; think journalists or politicians active here.
@siderea people trying to sell stuff are more likely to get active depending on the income of those on a platform.
At the end of the day it's a game theory game between those trying to spread their message and the people building this platform.
@siderea one thing that I believe could change the game for the fediverse is that it's inherently distributed. Smaller instances should be easier to moderate, though at some point in time one may be forced to work with whitelists instead of blocking bad instances.
@siderea Very familiar. Around 2000, I worked for a company that made tiny little animations/presentations that played inside email messages. They set up a system to allow people to edit these and send them on without an account, and in a meeting I commented, "But people will use this for porn and spam." General hilarity ensued. It was a nice company, but there was some gentle ribbing.
The only time I ever saw them mentioned after I left was because they were overrun by porn...
@siderea in some sense everything will have some kind of homogeneity. All the internet is made by and for humans, for example. But depending on the context, it will be very diverse. In the example you gave, emacs and vim users would be in different groups for editor-related discussions, but if something can unite them, then they would have other topics to share or interact with, making it heterogeneous and diverse, at some level.
@siderea I see echoes in here of the challenge of diversifying a school. I think members of a cloistered community can really underestimate the degree to which the unwritten rules that keep a place inequitable and racially-homogenous are the same structures that make it feel like home to the ones who are there. Racism without racists, as it were.
@siderea I've put this before as "Nascent forms of social media and of the Internet were basically designed for a tech aristocracy that didn’t know it was an aristocracy and that thought that the network wouldn’t necessarily be exploited because for the users at the time, it just wasn't."
An "open marketplace of ideas" can only exist in a situation where we have social coordination enough to say "genocide doesn't belong as one of these ideas." Where, to extend the metaphor, you can’t sell hemlock in place of carrot greens.
When the internet was first taking off, you had to have a university connection to access it, which meant that all of the policing of the trolling was taking place outside of the internet, and in the larger institutions.
When a university Sysadmin is told "Sort these people out, or we will disconnect the whole university from the internet!", then the individuals causing the problems, get dealt with very quickly.
@siderea of course Usenet had the alt.* hierarchy, which was described as the home of "anarchists, lunatics, and terrorists", which I always found amusingly self-deprecating, but through the lens of your posts, you can feel it coming across as a lot more judgy
@siderea you raise a lot of thoughtful points here. There's definitely a "this is a niche place with fewer griefers" element at play; however, a few design factors may help with "natural" policing by the community:
- no algo-driven timeline to "reward" nasty griefers with attention
- the risk of instances being defederated is similar to the example of university admins being told to deal with a problem (time will tell how effective but it's a similar parallel)
- longer text limits allow, and I'd argue encourage, more nuance than pithy angry one-liners
- again, not being corporate sponsored means there's no marketing push with hard dollars to encourage mass joiners, keeping growth slower
@siderea @Gargron this is amazing — thank you.
To my followers, because I’m going to boost my reply — The thread is a tremendous introduction asking us to consider what it would mean to intentionally *Engineer* social media spaces.
What doing that might entail as a scientific process. How this has been done in other fields. Why we should do it, and what happens if we don’t.
And who we could seek out, with experience of entire related fields, who could help.
@benjohn Absolutely this. People tend to say "You can't solve social problems with technology", and while there's a core of truth to that, in situations like this it's more wrong than right.
What I've found frustrating so far about so much of Fedi is how little thoughtful design there is. There's a lot of kneejerk changes and taking the easiest-to-implement path, leaving an awful lot of debt on the route from here to "good". So, I'm a lot more pessimistic than I want to be about a designed future.
1/
@benjohn The "thoughtful" element being particular tough, as social systems are not simple, they need a proper systems approach, strategically thinking ahead "if this is done, how will people react, get around it, where will that lead?", and an understanding that even then not everything will be foreseeable, and keeping future flexibility open. The discussion up-thread of learning from other systems is key here - make the most of others' mistakes, don't repeat them!
2/2
Sure. What about it? What I most remember is how at the time it was widely seen as a pale shadow of LiveJournal's model (still being used at Dreamwidth) by those of us who had experience of both, though I don't remember the details of where G+'s circles fell down.
To this day, no platform has come up with a better Access Control Lists (ACLs) model than LJ.
(Now, I think I can improve on it! I have some ideas. That's a separate rant.)
But if you're looking to make suggestions for Mastodon to adopt an ACLs model, you might want to leverage the LJ model as an example (if one does not want to access LJ, for the various good reasons people don't, Dreamwidth is still right there).
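For readers unfamiliar with it, the LJ-style model can be sketched roughly like this: each journal owner defines named groups of readers, and each post is locked to a subset of those groups. All names and details here are my own illustrative guesses at the shape of the model, not LiveJournal's actual implementation.

```python
from dataclasses import dataclass, field


@dataclass
class Journal:
    owner: str
    # group name -> set of usernames; groups are defined per-journal,
    # so each owner curates their own audience segments.
    groups: dict = field(default_factory=dict)

    def define_group(self, name: str, members: set) -> None:
        self.groups[name] = set(members)

    def can_read(self, reader: str, post_groups: list) -> bool:
        """A post locked to post_groups is visible to the owner and to
        any reader who belongs to at least one of those groups."""
        if reader == self.owner:
            return True
        return any(reader in self.groups.get(g, set()) for g in post_groups)


j = Journal("example_author")
j.define_group("work", {"alice"})
j.define_group("fandom", {"bob", "carol"})

assert j.can_read("bob", ["fandom"])       # in an allowed group
assert not j.can_read("alice", ["fandom"])  # known reader, wrong group
assert j.can_read("example_author", [])     # owner always sees their posts
```

The key property is that access is evaluated against the *author's* group definitions at read time, which is easy when one server holds both the post and the group memberships, and much harder when readers live on federated servers the author doesn't control.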
Now, all that said, there are some interesting sociotechnical reasons that federation makes ACLs very hard.
@siderea you mention social engineering is a term already in use for other purposes. I kept thinking what you describe sounds more like an online extension of civil engineering, which (as I understand it, at least) is concerned with building and maintaining environments for people to coexist successfully (safely, productively).
@dabblecode Oh, no. (Heh. Source: was briefly a civil engineering major, once upon a time.)
Civil engineering is entirely about matter, building with it, and keeping it from falling down once it's been built up. It's not at all concerned with the social-behavioral effect of what it builds on the people that encounter it, except that it is not supposed to kill people, except when it is.
The social-behavioral stuff? That's "design", and that's the architect's job. And then the architect hires the civil engineer to figure out how to put up the thing they designed so it doesn't fall down.
One of the worthiest threads I’ve read on Mastodon in five years.
Thank you for thinking through all this and for the call to think, communicate, and get involved. Program evaluation and social science methods are central to my profession and it’s lovely to see them called out here.
@siderea Please copy this brilliant thread to a blog post or article somewhere so it's easier to link back to later. It is too good to get lost to the backscroll ether.
@siderea
I've been moderating/governing online spaces for 25 years, and I've come to one clear conclusion: You will never make a space comfortable for everyone. It will not happen. Accept it, and decide explicitly what kind of person you want to feel "safe" and which you don't.
And anyone that is outside the Overton Window you want to have, exclude fast, exclude hard. (cf the "Nazi bar" story that's gone around, but for much milder things than Nazis.)
@siderea More mundane example: If you want a Star Trek fan forum, cool. If someone keeps starting threads about Game of Thrones instead... they don't belong there. Really, they simply need to leave. Not because GoT is bad, it's just Not Appropriate Here(tm).
This of course gets much harder at scale, as you note. Which is why mega-scale networks... maybe aren't even the right model in the first place.
@Crell Oh, hard agree! I actually have two other rants in me illuminating exactly this.
The issue isn't mega-scale networks, it's whether or not there are *boundaries* that demarcate *topic spaces*. In THIS space we discuss Star Trek, and in THAT space we discuss Game Of Thrones. Both spaces can be on the same network or even on the same server, but it has to be possible for the people who want to see only Star Trek to not see any GoT, and the people who want to see only GoT to not see any ST.
Not because either is offensive to the other, but because neither group will feel that it is in control of its experience unless that is true, and they are likely to start treating the others as interlopers and get defensive and hostile.
@Crell it's like Frost wrote: Good fences make good neighbors.
@siderea And sometimes you want a common identity in both the Trek and GoT areas, and sometimes you don't. (Eg, professionally-related topic and your BDSM topics.)
In some ways, Discord has gotten this most-right so far. (Or least-wrong.) (Common login, you're "in" each server at once, but can only see shared servers, not people's non-shared servers, lots of mod configurations, bot support, etc.) There's probably something to learn from that.
@Crell That is the plan! Maybe it'll be up by tomorrow? I'll try to get back to you with a link when I do.
@siderea
This was such a long thread that it literally took me days to read it. But it was so much worth it. Thank you for taking the time to write this.