A few weeks ago, I returned from the Second Symposium on Applied Rhetoric. I had the honor of being on the planning committee for the Symposium, so I had the distinct pleasure of loving every minute of this thing I’d helped create and also slightly worrying every minute that something was going to go wrong. But lo! Nothing went wrong. Everything was great.
The Symposium on Applied Rhetoric (and its parent, uh, “thing,” The Applied Rhetoric Collaborative) is a small group of scholars interested in how rhetoric gets things done in the world. There’s a big focus on analysis of public rhetoric, service learning, and other types of research on how rhetoric influences and impacts the world at large.
It’s a hands-on conference in more than just topics–one of my favorite parts of the symposium is that it encourages people to bring incomplete projects, half-thoughts, and other imperfect ideas to workshop. Ending with “and then I don’t know where this goes next” is what we like to hear–because we can give constructive feedback and ideas about where it could go! The community is rigorous and engaged but also aware that each of us will have unfinished ideas at some point–so while there were moments of debate and questioning, there weren’t any gotcha questions. It was a collaborative experience, as the “Collaborative” in the organization’s name suggests.
This aspect of the symposium made it the most fun I’ve ever had at a conference; instead of asking questions to poke holes or clarify, I was able to ask questions that might directly help a future draft of the paper. That’s fun! Contributing and collaborating on research, even in little ways, is a genuine joy.
We’ll be having our third symposium in June of 2020; if you’re interested in hearing more about the symposium or the collaborative, you can email me (Stephen.Carradini@gmail.com), ping me on Twitter, or contact me some other way. I’d love to talk more about it. In my next blog post, I’ll talk about what I presented at ARC.
Facebook is rightfully concerned about Kogan-style research abuses, but shutting down the platform to all research because of one very bad case of abuse is a significant problem for anyone who uses Facebook. People need research on platforms, services, and governments that they are beholden to; research on these entities allows abuses to be exposed, problems to come to light, blindspots to be revealed, best practices to be discovered, and new uses to be documented.
Without academic research and its sibling, data journalism, all we know about platforms, services, and governments comes from what they are willing to tell us, people’s anecdotal experiences, and large-scale qualitative efforts. Having been a professional practitioner of PR, feature stories, and large qualitative interviewing projects, I can state firmly that these efforts are deeply important. They are necessary but not sufficient. The quantitative analyses that come from platform research are necessary to fill out the whole picture of a platform, service, or government.
Facebook would rather not have its data abused, but they also seem not to want lights shined on their dark corners. As a nigh-on-infinite stream of Facebook failures continues to come to light without the use of platform data, I’m sure that Facebook would rather not have what it sees as another angle of attack levied against it. They would rather shut the door. Their fear of being exposed is legitimate, but only because they have lots to expose.
Their fear of being exposed is not as important as the protection of the people that use their platform and the protection of people affected by those who use their platform. Suicides related to Facebook bullying, the ongoing Rohingya genocide, livestreams of physical and sexual violence by attention seekers, and the revealing of vulnerable populations’ sensitive data to people/parties that can use that data against them are only a few of the life-and-death situations that Facebook finds itself unintentionally propagating. Facebook has not found answers to these problems, and these are just the problems we know about. Opening their data could expose even more problems, and I’m sure they would not want that to happen. But it’s imperative for the health and safety of their population that they do this.
Giving researchers platform-level data access would allow them to help solve these known and unknown problems in the long run. While the research itself may not directly cause change, the knowledge that the research creates can be used by the platform itself to correct some of the errors of its ways. If the platform will not correct its ways, governments have been increasingly gearing themselves up to correct Facebook’s ways through regulation. Research could help these governments regulate effectively and begin working toward solutions for and mitigations of these problems for users.
Why would Facebook want to invite researchers in when the result would be more effective regulation? I would argue that, much like the aviation industry, Facebook should want regulation on itself. Outside regulation would keep people safe and give Facebook rails that they can lean on. If investors are upset that Facebook isn’t making as much money as it used to, Facebook can reply that it’s due to regulations–not due to anything they did. (In the macro view, Facebook is responsible for any regulations put on it by not doing the right thing and thus requiring regulations to be administered; but corporate earnings calls operate in slightly-to-massively different universes than the long-term ethical view of economic responsibility.)
Users whose actions are regulated or even regulated out of existence won’t be happy either, but Facebook can again point to the regulations as “not our decision.” For users whose actions would be lightly affected or unaffected by regulation, a safer Facebook would make them happier and perhaps even more eager to use the platform. And if more users are healthier and happier than they are now, that’s a win for the users and society at large. (There’s a question there about whether always getting more of what the algorithm thinks you want makes you happy and healthy, but that’s a story for another day.)
In short, Facebook should want to have happy, healthy users who enjoy the platform and use it in large amounts. Opening the platform data to researchers is a step toward that end; a step that comes with many difficulties in the short- and medium-term, but that could strengthen Facebook’s economic and existential prospects in the future. In light of its future, I strongly urge Facebook to open its platform to researchers.
My friend and collaborator Chris Krycho is beginning work on a massive note-taking/reference-managing/word-processing/document-formatting tool aimed at making things easier for writers of academic research, and he’s keeping people updated via email. (It’s temporarily called rewrite.) The goal is to replace the hodgepodge collection of research tools that each academic uses with a single piece of software that integrates all parts of the research process.
Instead of cramming academic writing into pre-existing tools, this project aims to make tools for writers of research first and foremost; that approach alone made me excited. The software will be in the same space as Nota Bene, though it will take a different approach to all-in-one research software.
I am on the Researchers’ Advisory Board for this project because I routinely bang my head against the UI and functionality limitations of Microsoft Word and the frustrations of EndNote integration with Word, which this promises to replace. Anyone else who struggles with these limitations when trying to write research prose (or manipulate figures/charts/graphs, yikes yikes yikes) may be interested in this project.
My current contributions as part of the Researchers’ Advisory Board are to give feedback, from the perspective of a user, on ideas that Chris (and his team, once the project grows to that point) are mulling over; to suggest ideas that might make the software more usable for academics; and to discuss the other board members’ ideas to the same end. Later, I expect to be testing the functionality and user experience of the software as well.
No matter what comes out of it, I’m excited that academic researchers are the intended audience. That approach alone will make the project valuable. This is an enormous project, and Chris Krycho knows it. But even if it takes a few years, hey, what’s a few years between academics?
I was honored to have three speaking requests in March. I opened the month speaking to the Surge Network on “The Blessings and Curses of Social Media,” offering pastors suggestions on how to approach unhealthy and healthy social media use in their work with parishioners. The talk was recorded; I’ll post the audio and slides when the audio is available.
I then spoke to the entrepreneurs of Hustle PHX about how to create a marketing plan and how to draft a value proposition as part of that marketing plan. The slides from that presentation are at the end of the post. I always have a good time working with Hustle Phoenix, and I look forward to working with them again in the future as a speaker and mentor.
I finished my month speaking with advanced doctoral students in the Hugh Downs School of Human Communication about “How to Have a Professional Social Media Presence and Also Get Things Done.” Having only recently been a graduate student myself, I was very excited to share my expertise and pay it forward from all the support I received as a graduate student. There is video of this presentation too, and I will post that content with slides as soon as the video is online.
I enjoy getting out there and putting my expertise to use for other people, so I had a blast this month. I don’t have any public talks for a while–back to the academic talks for me.
While talking about YouTube in my Social Media for the Workplace class, a student asked if YouTube would someday eat cable. I decided to answer this intriguing question by nabbing the art style of Ben Thompson of Stratechery and (implicitly) some of his ideas on aggregation/disaggregation. I ended up with this:
To explain what the board says: It used to be that all of the channels (upper left boxes) were collected into one bundle by the cable company and then delivered to the user. The cable company got money; the channels got money. If the user didn’t want a specific channel (box with star), it didn’t matter: you got the channels you didn’t want along with the ones you did.
Now there’s some unbundling happening, with unbundlers like Sling. However, Sling really just serves up smaller bundles; users can pick the bundle they want, and perhaps get fewer channels they don’t want. Sling gets money, channels get money, you’re still stuck getting a channel or channels that you don’t want in order to get the ones you do (which is the arrow pointing at the top box in the bottom left diagram).
Now, anyone can set up their own streaming service (I picked Amazon Web Services as a stand-in for “set up your own streaming service”), and people are doing that–Disney, ESPN, BritBox, among others. They’ve cut out the middle man and can charge whatever they want. The reason I don’t expect that these channels will get re-collected back into one bundle is that no one wants to re-pay the middle man, after having just cut the middle man out.
However, maybe a third party comes along and creates some sort of user experience that includes all of your subscriptions in one place; that third party would need to charge on top of the cost of the individual subscriptions, because it wouldn’t easily be able to convince the companies to take a hit on their prices (aka pay the middle man) to do this. (The numbers next to the streaming services are real/theoretical subscription prices; the numbers on top are theoretical profit cuts if a middle man were reinserted. The +8 is the theoretical cost of this middle man without price cuts from the streaming services, bringing the 22 dollars to 30 a month. The 27 is a stray number that didn’t get developed.)
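The whiteboard arithmetic is simple enough to sketch in a few lines. The individual service prices below are illustrative stand-ins (only the $22 total and the +$8 middle-man fee come from the board):

```python
# Hypothetical a-la-carte subscription prices; the service names and
# per-service dollar amounts are placeholders, not the whiteboard's figures.
subscriptions = {
    "Service A": 7,
    "Service B": 5,
    "Service C": 7,
    "Service D": 3,
}

# What you pay by subscribing to each service directly.
base_total = sum(subscriptions.values())

# Theoretical fee for a reinserted middle-man aggregator, assuming the
# streaming services refuse any price cuts to subsidize it.
middleman_fee = 8

# What the same channels cost once the aggregator takes its cut on top.
bundled_total = base_total + middleman_fee

print(f"Direct subscriptions: ${base_total}/month")    # $22/month
print(f"Through an aggregator: ${bundled_total}/month")  # $30/month
```

The gap between those two totals is the whole problem: the aggregator’s convenience costs users roughly a third more, which is exactly the middle-man fee everyone just cut out.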
But the grail is still the ability to subscribe to any channel that you want, when you want it. I listed the various channels that I want: Fox (particularly the regional channels), ESPN, TNT, and TBS (to watch my Oklahoma City Thunder), and NBC Sports (for the Olympics). I can’t subscribe to those individually yet.
I told students that YouTube offers the possibility of a future of truly a la carte channels. It will not be done through YouTube TV, which is basically cable. Instead, YouTube has a chance to be a haven for channels when the great cable unbundling happens: the moment when cable’s bundling strategy is no longer profit-making for the company and cable starts to take apart or entirely disintegrate its bundling strategy. (I am not a cable/over-the-top journalist and thus have no idea when this will happen, but it seems like it will happen at some point.)
When the great cable unbundling occurs someday, some channels that were surviving financially by receiving payments from the cable distributors for being bundled will disappear entirely. Some will be unbundled and left to fend for themselves as independent TV stations or cable add-ons (like HBO or Starz). Some will turn into independent subscription services. But, I imagine that some will become YouTube channels. If YouTube can offer monetization channels that are free from capriciousness (a big if), then moving a whole niche TV channel from cable to a YouTube channel might make sense.
And, if YouTube positions itself correctly, I suspect that some subscription services that start to falter (as some inevitably will, once we reach peak subscription and people won’t pay for the X channel on subscription) will see YouTube as a good option.
All of this is predicated on YouTube figuring out how not to be capricious with its ad(pocalypse) policy, which is by no means guaranteed. But there’s definitely a way that YouTube eats cable someday. Maybe not, but maybe so.
I really enjoyed the mini-lecture but don’t see it happening again for a while, so I thought I’d post it here.
I’ve been co-hosting a podcast about the ethics of technology for five years now, but I’ve not written much about the ethics of technology. Part of that is due to the path my academic research took during my doctoral years, but part is due to saying much of what I wanted to say in the podcast. There’s now some space for me to write a bit on the ethics of technology on my own.
The space comes from a new planning process in the podcast. We’re beginning to develop rhythms for the publication life of the podcast, as opposed to following the flow of the co-hosts’ professional and personal lives. Our lives have been in a near-constant state of flux over the last few years, and honestly it seems like that flux will continue for at least another year or two on my end. So instead of following the flow, we’re making a plan that accounts for our flux but also allows us to be more regimented in publishing.
As a result of that planning process, I now have some time where we’re intentionally not publishing the podcast, as we prepare for the next season. I’ve been thinking about a particular article I read regarding an AI facial recognition company a few months ago, and now have time to jot down some notes about it. I posted this as a Twitter thread first, but it’s worthwhile to preserve for posterity as well. So, a lightly edited version of the Twitter thread is below. I hope to post more thoughts of similar ethics-of-technology nature in this blog over the next months and years, particularly in the spaces between Winning Slowly recording/publishing. So, Morals Are Not a Luxury:
My initial thoughts stem from a statement by the co-founder of an AI company building China’s facial recognition technology: “We’re not really thinking very far ahead, you know, whether we’re having some conflicts with humans, those kinds of things,” he said. “We’re just trying to make money.”
That quote on its own is remarkably candid about how little ethics factor into their AI calculus. But it’s another quote that struck me more, because it’s not news that technologists don’t think about consequences very much: “But at the Singapore Defense Technology Summit this summer, co-founder Tang stood before more than 400 military and government officials and contractors from all over the world and said SenseTime doesn’t have the luxury of worrying about some of AI’s moral quandaries …”
So morals are a “luxury” now? This is what raw, unfettered because-we-can-we-will-and-then-make-money looks like. Not everyone in tech is this bad about morals (or at least this candid about it), but I can’t help looking at that quote and thinking “There’s the problem.”
Those of us concerned about the ethics of technology have to keep working on all possible ends to get through to these companies that there is much more than money here at stake, and that there are some places tech can go that it shouldn’t. Federal policy, field-level self-regulation, peer pressure, international cooperation (international boundaries are definitely part of the problem here, but they can also be part of the solution), everything should be pursued and continued to be pursued post-haste.
Until companies like SenseTime no longer consider morals a luxury, they should be considered as having no guiding compass at all: no one to say “hey, this is a bad idea.” This is a company that will stop at nothing.
Sometimes critics are themselves critiqued for fear mongering, for making something out of nothing, for imagining ghosts where none exist. This is proof straight from the source that this is no longer imagining: these people admittedly do not have “the luxury” of morals.
In short, this is what market thinking in technology has developed: a company that thinks there might be moral problems with total and complete surveillance but doesn’t have time to “worry” about the “luxuries” of morals.
I’m pleased to announce that I have a new article out today. It’s called “Artist Communication: An Interdisciplinary Business and Professional Communication Course.” It focuses on a course I taught while a doctoral student at the Communication, Rhetoric, and Digital Media program at North Carolina State University. If you’re looking for a communications course focused on the work that artists do, this may give you some ideas about rationale, readings, assignments, and class planning. I see it as being particularly valuable for arts entrepreneurship programs and programs interested in workplace communication.
I would particularly like to thank Brian Penick of Musicians’ Desk Reference for getting my students the Reference, the fine editors and anonymous reviewers of Business and Professional Communication Quarterly for their help in improving the article, the fictional band Mouse Rat for appearing in this article, and the CRDM for letting graduate students design and teach their own class. That choice is a small sacrifice on the faculty’s part that pays huge dividends in the life of graduate students and young faculty.
Happy new year! As we charge ahead into 2019, I’d like to take a moment to look back on the year that was. 2018 was my first full year as an assistant professor at ASU. While I did complete some projects, I primarily laid groundwork that I hope will come to fruition in 2019. I’ll mention a few completions first.
In my Fall 2018 undergraduate and graduate versions of Social Media in the Workplace (TWC 422/522), I partnered with the City of Glendale via the Project Cities arm of the Sustainable Cities Network AZ. The students of the two courses worked on separate projects to help revamp the City’s social media presence. The students of the graduate course worked on developing a Policies, Rules, and Procedures document to guide the city’s day-to-day work, while the undergraduate students developed a full Social Media Plan that encompassed audience/demographic targeting, content goals, strategy, management details, an implementation schedule, and budget considerations.
The culmination of these semester-long projects was a presentation to stakeholders from the City of Glendale; students presented a poster and an oral presentation describing their class projects. My TWC 522 students, who were online students, recorded a PowerPoint presentation with audio voiceovers–it went very smoothly. Select students of my TWC 422 course presented on the ideas the class came up with to help the City of Glendale reach specific, hard-to-reach demographic audiences. I also contributed to presenting the poster, as you can see in the above photo.
I enjoyed working with Project Cities and the City of Glendale very much. The students worked with a real client (which I always find to be of value to them), and I was able to couch what the students were learning throughout the course in the context of the two projects. This method of teaching was particularly helpful in the 422 undergraduate course. I look forward to working with Project Cities again in the future!
Beyond the completion of the Project Cities work, I also had some research successes: I am happy to note that I received a C.R. Anderson Research Grant, had an article accepted for publication in May at Business and Professional Communication Quarterly, had a book chapter accepted for a book that should be out in December 2019, and presented at two conferences. (I also received two article rejections and an extensive Revise and Resubmit; nothing is ever quite an unbroken string of success.)
Finally, I became a member of the graduate faculty of the doctoral program in the Hugh Downs School of Human Communication in Spring and began to mentor a doctoral student.
As for the groundwork, I have many articles and projects in various stages of completion; I look forward to completing some of them and sending them out for publication this year. I am particularly excited about finishing the work that the C.R. Anderson Grant funded surrounding a corpus analysis of Kickstarter campaigns.
Here’s to 2019! May your endeavors be moved speedily through the various channels we all navigate and emerge at the Elysian Fields of publication mostly unscathed.
I’m really thrilled to note that my first article has a full citation now:
Carradini, S. (2018). An organizational structure of indie rock musicians as displayed by Facebook usage. Journal of Technical Writing and Communication, 48(2), 151–174. doi:10.1177/0047281616667677
It will soon be joined by an article, currently in press, about teaching entrepreneurship communication to artists! I’m also very excited to note that the Association for Business Communication has been kind enough to give me a grant to work on a big data analysis of Kickstarter writing; I’ll be putting out at least two publications from that in the next year. The first place you can see some of that analysis in action is at the Association for Business Communication conference in Miami; I’ll be headed down there in October to give a presentation on multimodal writing in Kickstarter campaigns. The goal of all that work is to figure out how best to convince people to give you money–it seems like a fairly evergreen concern.
I’m also working on several collaborative projects that are making their way through the process; I’ll be co-presenting a poster at ABC on one of those. Matt Baker and Matt Sharp will be co-presenting with me on data we gathered from a survey on ABC membership about the locations and content of their graduate education. It’s been a lot of fun to work with them and our new research assistant Elise Davidson on this project, and I look forward to presenting on it at ABC.
In 2017 I undertook a personal quest to read about the history of the Internet. For various reasons, I had gone through my whole doctoral program without actually studying the history of the Internet. I decided to recreationally fix this hole in my personal knowledge. Here’s a run-down in the order I read these 12 books; the order is a not-so-perfect chronological rendering of the history of computing. I should note that I used Walter Isaacson’s The Innovators as the Keeper of the Timeline; I would read a chapter until I hit a reference to a book or topic. Then I would pick up the book specifically about that topic, read it, then return to the Isaacson and repeat the process. This worked great.
There are a lot of other books about the history of the Internet; some are probably more canonical or “better.” These were the ones I chose based on previous recommendations and (gulp) Amazon/Goodreads reviews.
[Doesn’t count for this history but it did happen: I read Neal Stephenson’s Cryptonomicon first, which has Alan Turing and early computing machines in it. It’s Peak Neal Stephenson, which means I thought it was great but probably don’t want to recommend it to the uninitiated.]
The Idea Factory: Bell Labs and the Great Age of American Innovation by Jon Gertner. Basically a prehistory of the Internet, this book chronicles the people behind Bell Labs’ astonishing array of inventions in their most fruitful period (1917-1982): the transistor being paramount, with the discovery of silicon as a main component coming in a close second, followed (in no particular order) by UNIX, fiber optic cable, communication theory, lasers, communications satellites, microwave communications, solar batteries, and light-sensitive electronic sensors. The book is very well written and thoroughly enjoyable.
Where Wizards Stay Up Late: The Origins of the Internet by Katie Hafner and Matthew Lyon. This, in addition to having the best title of any book I read last year, is a strong history of ARPANET. Done through interviews with people who were there, the book contains a detailed story of how ARPA came to be in the late ’50s, through its work of setting up an internet, all the way until ARPANET’s demise in 1989. I thoroughly enjoyed this book.
Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age by Michael A. Hiltzik. This book is three things: one long experience of schadenfreude, a chronicle of intensely dysfunctional office politics, and a fascinating look at how Xerox invented so much stuff. The first two topics are far less interesting than the third.
Inventing the Internet by Janet Abbate. This book is the best overall view of how the Internet was born that I read. It’s well-researched, insightful, and spends the right amount of time on each subject. The only downside is that the introduction is not as compelling as the rest of the very compelling book.
Hackers by Steven Levy. The first half of this book is a giddy romp through the other history of computing: MIT nerds, basement homekit tinkerers, hippies, and other misfits who created large swaths of computing as we know it. The second half gets into industry stuff (Atari, Apple, Microsoft) and is less interesting by dint of other people covering the same topics. The exception is the chapter on early home computer video game companies (Broderbund! Sierra Online!), which is so weird and zany it seems more like fiction than fact. It’s like the end of the zany documentary The King of Kong; both are so surreal and over-the-top that they have to be true.
Weaving the Web by Tim Berners-Lee. The first 3/4ths are a hilariously detailed, blow-by-blow account of setting up the World Wide Web. The last fourth is a bit of a manifesto/future prediction that draws heavily on his hope for humanity to become united with the web as a catalyst for that. (He is probably sad these days.)
A Brief History of the Future: The Origins of the Internet by John Naughton. Written from a UK perspective, this is a charming, breezy, lighthearted take on the history of the Internet. I enjoyed it quite a bit. Didn’t add too much to the literature, but it would be great for someone who had no idea what an internet was–as this was published in 1999, that makes sense.
The Innovators by Walter Isaacson. This is a fantastic book about the history of computing if you have time for only one. Again, adds little to the literature, but Isaacson seems to have read all the literature and synthesized it. If you like this book, don’t tell any of your historian friends that you do. (You may get fire breathed at you.) Oddly, the conclusion does not really follow from the rest of the book.
Tubes: A Journey to the Center of the Internet by Andrew Blum. This one’s my favorite book of this whole quest to read about the Internet. It’s really a travelogue. I picked it up while in Ireland for the Association for Business Communication conference, so it hit me at the right time to resonate. The author travels all over the world looking for the places where the physical wires, cables, servers, and tubes of the Internet are housed. It sounds boring, but trust me: he’s such an incredible writer that it was one of the most fun books I’ve read all year.
The Mythical Man-Month by Frederick P. Brooks: What is conceptual is still relevant to people trying to lead teams (not just computer programming teams; he mentions other types of organization in passing). What is historical is relevant to me because of my interest in the history of computing. Both conceptual and historical elements are very droll and funny. A+, would read again.
Thanks to Chris Krycho, who helped me shape my thoughts on these books as I was reading.