Collaborating on unfinished research through presentation questions

A few weeks ago, I returned from the Second Symposium on Applied Rhetoric. I had the honor of being on the planning committee for the Symposium, so I had the distinct pleasure of loving every minute of this thing I’d helped create and also slightly worrying every minute that something was going to go wrong. But lo! Nothing went wrong. Everything was great.

The Symposium on Applied Rhetoric (and its parent, uh, “thing,” The Applied Rhetoric Collaborative) is a little group of scholars interested in how rhetoric gets things done in the world. There’s a big focus on analysis of public rhetoric, service learning, and other research on how rhetoric influences the world at large.

It’s a hands-on conference in more than just its topics: one of my favorite parts of the symposium is that it encourages people to bring incomplete projects, half-thoughts, and other imperfect ideas to workshop. Ending with “and then I don’t know where this goes next” is what we like to hear, because we can give constructive feedback and ideas about where it could go! The community is rigorous and engaged but also aware that each of us will have unfinished ideas at some point, so while there were moments of debate and questioning, there weren’t any gotcha questions. It was a collaborative experience, as the “Collaborative” in the organization’s name suggests.

This aspect of the symposium made it the most fun I’ve ever had at a conference; instead of asking questions to poke holes or clarify, I was able to ask questions that might directly help a future draft of the paper. That’s fun! Contributing to and collaborating on research, even in little ways, is a joy.

We’ll be having our third symposium in June of 2020; if you’re interested in hearing more about the symposium or the collaborative, you can email me (Stephen.Carradini@gmail.com), ping me on Twitter, or contact me some other way. I’d love to talk more about it. In my next blog post, I’ll talk about what I presented at ARC.

The Positives of Social Platform Research for Social Platforms

I’ve signed a letter circulated by the Knight First Amendment Institute calling on Facebook to establish a Safe Harbor for research on the social media platform. (I am sure the signature will show up on the page soon.) As a researcher with a published article about Facebook use on my CV, I see this as an issue directly related to the work that I do.

Facebook is rightfully concerned about Kogan-style research abuses, but shutting down the platform to all research because of one very bad case of abuse is a significant problem for anyone who uses Facebook. People need research on platforms, services, and governments that they are beholden to; research on these entities allows abuses to be exposed, problems to come to light, blind spots to be revealed, best practices to be discovered, and new uses to be documented.

Without academic research and its sibling data journalism, all we know about platforms, services, and governments comes from what they are willing to tell us, people’s anecdotal experiences, and large-scale qualitative efforts. Having professionally produced PR, feature stories, and large qualitative interviewing projects, I can state firmly that these efforts are deeply important. They are necessary but not sufficient. The quantitative analyses that come from platform research are necessary to fill out the whole picture of a platform, service, or government.

Facebook would rather not have its data abused, but it also seems not to want lights shined on its dark corners. As a nigh-on-infinite stream of Facebook failures continues to come to light without the use of platform data, I’m sure that Facebook would rather not have what it sees as another angle of attack levied against it. It would rather shut the door. I recognize that fear of being exposed as legitimate, but only because Facebook has a lot to expose.

Facebook’s fear of being exposed is not as important as the protection of the people who use its platform and of the people affected by those users. Suicides related to Facebook bullying, the ongoing Rohingya genocide, livestreams of physical and sexual violence by attention seekers, and the revelation of vulnerable populations’ sensitive data to parties that can use that data against them are only a few of the life-and-death situations that Facebook finds itself unintentionally propagating. Facebook has not found answers to these problems, and these are just the problems we know about. Opening its data could expose even more problems, and I’m sure Facebook would not want that to happen. But it’s imperative for the health and safety of its users that it do this.

Platform-level data access would allow researchers to help solve these known and unknown problems in the long run. While the research itself may not directly cause change, the knowledge it creates can be used by the platform to correct some of the errors of its ways. And if the platform will not correct itself, governments have been increasingly gearing up to correct Facebook’s ways through regulation. Research could help those governments regulate effectively and begin working toward solutions for and mitigations of these problems for users.

Why would Facebook want to invite researchers in when the result would be more effective regulation? I would argue that, much like the aviation industry, Facebook should want regulation on itself. Outside regulation would keep people safe and give Facebook rails to lean on. If investors are upset that Facebook isn’t making as much money as it used to, Facebook can reply that the shortfall is due to regulations, not to anything it did. (In the macro view, Facebook is responsible for any regulations put on it: by not doing the right thing, it made regulation necessary. But corporate earnings calls operate in slightly-to-massively different universes than the long-term ethical view of economic responsibility.)

Users whose actions are regulated, or even regulated out of existence, won’t be happy either, but Facebook can again point to the regulations as “not our decision.” For users whose actions would be lightly affected or unaffected by regulation, a safer Facebook would make the platform more pleasant and perhaps even encourage them to use it more. And if more users are healthier and happier than they are now, that’s a win for the users and society at large. (There’s a question there about whether always getting more of what the algorithm thinks you want makes you happy and healthy, but that’s a story for another day.)

In short, Facebook should want happy, healthy users who enjoy the platform and use it in large amounts. Opening platform data to researchers is a step toward that end: a step that comes with many difficulties in the short and medium term, but one that could strengthen Facebook’s economic and existential prospects in the future. With that future in mind, I strongly urge Facebook to open its platform to researchers.