Rethinking AI

Organisers
Ramón Reichert, Universität Wien; Mathias Fuchs, Leuphana Universität Lüneburg
Venue
Wien / Lüneburg / Maastricht / Köln
Country
Austria
From - To
01.06.2017 - 10.07.2017
Deadline
10.07.2017
By
Ramón Reichert

Rethinking AI. Neural Networks, Biopolitics and the New Artificial Intelligence
Ramón Reichert, Mathias Fuchs (eds.)

The meaning of AI has undergone drastic changes over the last 60 years of AI discourse(s). What we talk about when we say “AI” is not what the term meant in 1955, when John McCarthy, Marvin Minsky and their colleagues started using it. Take game design as an example: when the Unreal game engine introduced “AI” in 1999, its developers were mainly talking about pathfinding. For Epic MegaGames, the producers of Unreal, an AI was just a bot or monster whose pathfinding capabilities had been programmed in a few lines of code to escape an enemy. This is not “intelligence” in the Minskyan understanding of the word (and even less what Alan Turing had in mind when he designed the Turing test). There are also attempts to differentiate between AI, classical AI and “Computational Intelligence” (Al-Jobouri 2017). The latter, labelled CI, is used to describe processes such as player affective modelling, co-evolution, automatically generated procedural environments, etc.
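The gap between the two usages can be made concrete in code. The “AI” of a 1990s game bot often amounted to little more than a shortest-path search over a tile grid. A minimal, hypothetical sketch of such pathfinding (illustrative only, not Epic’s actual code):

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search on a tile grid: roughly the whole 'AI' of many
    1990s game bots. grid is a list of strings where '#' marks a wall;
    start and goal are (row, col) tuples. Returns the list of tiles from
    start to goal, or None if the goal is unreachable."""
    frontier = deque([start])
    came_from = {start: None}          # also serves as the visited set
    while frontier:
        current = frontier.popleft()
        if current == goal:
            path = []                   # walk backwards to reconstruct
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != '#' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = current
                frontier.append((nr, nc))
    return None
```

Because breadth-first search expands tiles in order of distance, the first route it finds is a shortest one; actual engines typically used A* with a distance heuristic, but the principle is the same.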
Artificial intelligence research has commonly been conceptualised as an attempt to reduce the complexity of human thinking (cf. Varela 1988: 359-75). The idea was to map the human brain onto a machine for symbol manipulation: the computer (Minsky 1952; Simon 1996; Hayles 1999). Already in the early days of what we now call “AI research”, McCulloch and Pitts proposed in 1943 that networks of neurons could be used for pattern recognition (McCulloch/Pitts 1943). Implementing cerebral processes on digital computers was the method of choice for the pioneers of artificial intelligence research.
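The McCulloch-Pitts proposal fits in a few lines: a unit fires (outputs 1) exactly when the weighted sum of its binary inputs reaches a threshold, and such units can be wired together into logic gates. A minimal sketch of the 1943 idea, in modern notation rather than the original formalism:

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts unit: fires (1) iff the weighted input sum
    reaches the threshold, otherwise stays silent (0)."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Basic logic gates fall out of the threshold choice alone:
def AND(x, y):
    return mp_neuron((x, y), (1, 1), threshold=2)

def OR(x, y):
    return mp_neuron((x, y), (1, 1), threshold=1)

def NOT(x):
    # an inhibitory connection: negative weight, zero threshold
    return mp_neuron((x,), (-1,), threshold=0)
```

Since any Boolean function can be built from such gates, networks of these units are computationally universal over logic, which is what made the proposal so attractive to the pioneers of symbol-processing AI.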
The “New AI” is no longer concerned with observing the congruencies, or the limitations, of compatibility with the biological nature of human intelligence: “Old AI crucially depended on the functionalist assumption that intelligent systems, brains or computers, carry out some Turing-equivalent serial symbol processing, and that the symbols processed are a representation of the field of action of that system.” (Pickering 1993, 126) The ecological approach of the New AI has its greatest impact in showing how it is possible “to learn to recognize objects and events without having any formal representation of them stored within the system” (ibid., 127). The New Artificial Intelligence movement has abandoned the cognitivist perspective and instead relies on the premise that intelligent behaviour should be analysed using synthetically produced equipment and control architectures (cf. Munakata 2008).
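The contrast can be made concrete: where “old AI” would store an explicit rule, a learning system only adjusts numerical weights from examples, and no formal representation of the pattern is ever written down. A minimal perceptron sketch (purely illustrative; the New AI literature covers far richer architectures):

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights and a bias from labelled examples. No rule describing
    the target pattern is stored anywhere -- only numbers are adjusted."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - pred          # perceptron update rule
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Learn logical OR purely from its truth table, without encoding the rule:
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 1]
w, b = train_perceptron(samples, labels)
```

After training, the behaviour of OR is reproduced, yet nothing in the system “knows” it implements OR; the knowledge exists only implicitly in the learned weights.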
Kate Crawford (Microsoft Research) has recently warned against the impact that current AI research might have, in a noteworthy lecture titled AI and the Rise of Fascism. Crawford analysed the risks and potential of AI research and called for a critical approach to new forms of data-driven governmentality:

“Just as we are reaching a crucial inflection point in the deployment of AI into everyday life, we are seeing the rise of white nationalism and right-wing authoritarianism in Europe, the US and beyond. How do we protect our communities – and particularly already vulnerable and marginalized groups – from the potential uses of these systems for surveillance, harassment, detainment or deportation?” (Crawford 2017)

Following Crawford’s critical assessment, this issue of the Digital Culture & Society journal deals with the impact of AI on knowledge areas such as computational technology, the social sciences, philosophy, game studies and the humanities in general. Subdisciplines of traditional computer science, in particular Artificial Intelligence, Neuroinformatics, Evolutionary Computation, Robotics and Computer Vision, are once more gaining attention. Biological information processing is firmly embedded in commercial applications such as Google Assistant, Facebook’s facial recognition algorithm DeepFace, Amazon’s device Alexa and Apple’s software feature Siri (a speech interpretation and recognition interface), to mention just a few. In 2016, Google, Facebook, Amazon, IBM and Microsoft founded what they call a Partnership on AI (Hern 2016). This indicates a move from academic research institutions to company research clusters. In this context, we are interested in contributions on the history of institutional and private research in AI. We invite articles that trace the history of the notion of “artificial intelligence” and articles that show how specific academic and commercial fields (e.g. game design, the aviation industry, the transport industry) interpret and use the notion of AI.
Against this background, the special issue Rethinking AI will explore and reflect on the hype around neuroinformatics in AI discourses and the potential and limits of critique in the age of computational intelligence (Johnston 2008; Hayles 2014, 199-210). We invite contributions that deal with the history, theory and aesthetics of contemporary neuroscience and recent trends in artificial intelligence (cf. Halpern 2014, 62ff). Digital societies increasingly depend on smart learning environments that are technologically inscribed. We ask about the role and value of open processes in learning environments, and we welcome contributions that acknowledge the regime of production promoted by recent developments in AI. We particularly welcome contributions that are historical and comparative, or critically reflective about the biological impact on social processes, individual behaviour and technical infrastructure in a post-digital and post-human environment. What are the social, cultural and ethical issues when artificial neural networks take hold in digital cultures? What is the impact on digital culture and society when multi-agent systems are equipped with license to act?

Submissions might cover the following topics or extend beyond that:
A historical perspective of object/pattern recognition/identification/detection and AI
Artificial intelligence recognition algorithms
Computer vision
Deep learning
Device ecology
Digital education governance
Epistemology of learning in artificial neural networks
Evolutionary computation
Fuzzy systems and neural networks
Games and virtual worlds
Genetic algorithms
Human enhancement and transhumanism
Media archaeology
Philosophical Posthumanism
Philosophy of robotics
Prognostics and predictive modelling
Science history of neural nets and deep learning
Socio-cultural Posthumanism

Paper proposals may relate to, but are not limited to, the following questions concerning the new artificial intelligence paradigm (cf. Pfeifer/Scheier 1999; Munakata 2008). Interdisciplinary contributions, such as those from science and technology studies or the digital humanities, are particularly encouraged. When submitting an abstract, authors should state explicitly to which of the following categories they would like to submit their paper:

1. Field Research and Case Studies (full paper: 6000-8000 words)
We invite articles that discuss empirical findings from studies that approach the relationships between neurobiology, brain research, computational intelligence, biopolitics, psychological research and the new AI movement. These may include practices of circulating or collecting data as well as processes of production and evaluation.
2. Methodological Reflection (full paper: 6000-8000 words)
We invite contributions that reflect on the methodologies employed when researching the practices of the new tendencies of AI (e.g. artificial neural networks, fuzzy systems, genetic algorithms, evolutionary computation, deep learning, prognostics and predictive modelling, computer vision). These may include, for example, the specificities of ethnographic fieldwork in online/offline environments; challenges and opportunities faced when qualitatively researching quantifiable data and vice versa; approaches using mixed methods; discussions of mobile and circulative methods; and reflections of experimental forms of research.
3. Conceptual/Theoretical Reflection (full paper: 6000-8000 words)
We encourage contributions that reflect on the conceptual and/or theoretical dimension of the new artificial intelligence paradigm, and discuss or question how digital intelligence can be defined, what it can describe, and how it can be differentiated.
4. Entering the Field (2000-3000 words; experimental formats welcome)
This experimental section presents initial and ongoing empirical work in digital media studies. The editors have created this section to provide a platform for researchers who would like to initiate a discussion concerning their emerging (yet perhaps incomplete) research material and plans as well as methodological insights.

Deadlines and contact information
- Expressions of interest/Initial abstracts (max. 300 words) and short biographical note (max. 100 words) are due on: July 10, 2017.
- Authors will be notified by August 01, 2017, whether they are invited to submit a full paper.
- Full papers are due on: October 01, 2017.
- Notifications to authors of referee decisions: December 20, 2017
- Final versions due: February 01, 2018
- Please send your abstract and short biographical note to Ramón Reichert ramon.reichert@univie.ac.at and Mathias Fuchs mathias.fuchs@leuphana.de.

Literature:
Al-Jobouri, Laith. 2017. CEEC 2017: 9th Computer Science and Electronic Engineering Conference. https://www.aconf.org/conf_104832.html
Barad, Karen. 2007. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham/London: Duke University Press.
Halpern, Orit. 2014. Beautiful Data: A History of Vision and Reason since 1945 (Experimental Futures). Durham, NC: Duke University Press.
Hauptmann, Deborah and Warren Neidich. 2010. Cognitive Architecture: From Bio-politics to Noo-politics. Rotterdam: 010 Publishers.
Hayles, N. Katherine. 2014. “Cognition Everywhere: The Rise of the Cognitive Nonconscious and the Costs of Consciousness.” New Literary History 45 (2): 199-220.
Hayles, N. Katherine. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago/London: University of Chicago Press.
Hern, Alex. 2016. “‘Partnership on AI’ formed by Google, Facebook, Amazon, IBM and Microsoft.” The Guardian, 28 September 2016, online.
Johnston, John. 2008. The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI. Cambridge, MA/London: The MIT Press.
Keedwell, Edward and Ajit Narayanan. 2005. Intelligent Bioinformatics: The Application of Artificial Intelligence Techniques to Bioinformatics Problems. Chichester, UK: John Wiley and Sons.
Manning, Christopher D. 2015. “Computational Linguistics and Deep Learning.” Computational Linguistics 41 (4): 701-707.
McCulloch, Warren and Walter Pitts. 1943. “A Logical Calculus of the Ideas Immanent in Nervous Activity.” Bulletin of Mathematical Biophysics 5: 115-133.
Minsky, Marvin. 1952. A Neural-Analogue Calculator Based upon a Probability Model of Reinforcement. Harvard University Psychological Laboratories internal report.
Molteni, Megan. 2017. “Artificial Intelligence Is Learning to Predict and Prevent Suicide.” Wired, online: https://www.wired.com/2017/03/artificial-intelligence-learning-predict-prevent-suicide/
Munakata, Toshinori. 2008. Fundamentals of the New Artificial Intelligence: Neural, Evolutionary, Fuzzy and More. New York: Springer.
Neidich, Warren. 2014. “The Architectonics of the Mind’s Eye in the Age of Cognitive Capitalism.” In Brain Theory, Palgrave Macmillan UK, 264-286.
Parker, J. R. 2011. Algorithms for Image Processing and Computer Vision. Wiley.
Pfeifer, Rolf and Christian Scheier. 1999. Understanding Intelligence. Cambridge, MA: The MIT Press.
Pickering, John. 1993. “The New Artificial Intelligence and Biological Plausibility.” In Stavros Valenti and John Pittenger (eds.), Studies in Perception and Action II, London/New York: Psychology Press, 126-129.
Shaviro, Steven. 2014. The Universe of Things: On Speculative Realism. Minneapolis: University of Minnesota Press.
Simon, Herbert. 1996. The Sciences of the Artificial. Cambridge, MA: The MIT Press.
Stewart, Patrick. 2014. “Introduction to Methodological Issues in Biopolitics.” In Politics and the Life Sciences: The State of the Discipline, Emerald Group Publishing, 67-99.
Varela, Francisco J. et al. 1988. “Cognitive Networks: Immune, Neural, and Otherwise.” In Alan Perelson (ed.), Theoretical Immunology, Redwood City, 359-375.
Wenger, Etienne. 2014. Artificial Intelligence and Tutoring Systems: Computational and Cognitive Approaches to the Communication of Knowledge. Morgan Kaufmann.
Widrow, Bernard and Michael Lehr. 1990. “30 Years of Adaptive Neural Networks: Perceptron, Madaline, and Backpropagation.” Proceedings of the IEEE 78 (9): 1415-1442.

About the Journal:
Digital Culture & Society seeks contributions that display a clear, inspiring engagement with media theory and/or methodological issues. Emphasising the relevance of new practices and technology appropriation for theory as well as methodology debates, the journal also encourages empirical investigations.
For more info please visit: http://digicults.org/

Contact

Ramón Reichert

Universität Wien

ramon.reichert@univie.ac.at


Event language(s): English