Epistemic Amalgams: On Language, Theory And Expertise In Algorithmic Social Sorting
Algorithms are fast replacing traditional social sorting mechanisms in creating, recreating and reifying social identities. But while such mechanisms have traditionally relied on various theories to supply the basic discursive building blocks for identity construction, algorithmic classification often lacks a theoretical or linguistic base, and is accordingly seen as a post-hegemonic or post-textual mode of government. However, algorithms are human creations, the result of constant interactions between human actors and computer code. Theory, language and expertise therefore still play a role in the creation and implementation of such algorithms. But what role do they play? What theories take part in the algorithmic construction of identities? Are traditional disciplines like psychology or sociology still needed for sorting people, or have they been replaced by math and computer engineering? Moreover, what role does human language play in algorithmic sorting? And, if language is involved, does the use of linguistic categories shed light on algorithmic black boxes, or is it merely another measure of obfuscation? Relying on an ethnographic study of the Israeli data analytics scene, this paper offers a closer look at the epistemic amalgam of algorithmic profiling, and at the changing role of expert knowledge, theory and language in the algorithmic construction of identities. Furthermore, focusing on the often neglected ties between language, regionality and algorithms, Israeli start-ups are shown to use language-agnostic algorithms in an attempt to "code against culture", break out of their relative peripherality and expand into new, previously inaccessible, markets.