Related research

From CogInfoCom

Synergies Shaping CogInfoCom

In this section, several key points of synergy are discussed from the perspective of existing research fields relevant to the merging process between humans and ICT. It is important to emphasize that while all of these fields have their own motivations and unique set of methodologies, they also incorporate some aspect, or some future potential that makes them relevant to the use and support of cognitive capabilities in infocommunications.

One important notion that sets apart the goals of CogInfoCom from any of the research fields discussed in this section is that it aims to reach an understanding of how extended periods of co-evolution can trigger novel functionalities. This aspect of long-term co-evolution is rarely acknowledged, not to mention planned for, in other fields. Nevertheless, it goes without saying that a host of challenges, both in terms of analysis and design, can be better addressed if past results from synergistically related fields are considered - especially when it comes to addressing goal-oriented episodic interactions as opposed to long-term co-evolution. Further details can be found in the various journal special issues which have appeared on CogInfoCom-related subjects, as well as in the upcoming book on CogInfoCom (to appear in late 2015) [1].


Affective computing

Affective computing is a research field proposed by R. Picard at MIT in the 1990s that focuses on “computing that relates to, arises from, or influences emotions” [2]. While computation in general is often regarded as a subject area that should ideally be devoid of emotion, mounting evidence from various human-oriented sciences has brought about the realization that all high-level cognition – including perception, reasoning and decision making – is intimately linked with emotion. This view is supported not only through anatomical findings, but also through investigations of how reasoning and decision making are affected in subjects with physical lesions and / or emotional disorders.

As a result, research on affective computing has focused both on introducing aspects of emotionally influenced reasoning into computing, and on the perception, understanding and emulation of human emotions. With respect to this latter goal of emulation, Picard formulated four key components of emotion to be taken into consideration [3]:

  • Emotional appearance: behavioral manifestations give the appearance that a system has emotions;
  • Multiple levels of emotion generation: depending on e.g. the specific roles of emotions, or the availability of computational resources, different mechanisms can be used to generate emotions;
  • Emotional experience: a system is capable of attributing semantic labels to its own emotions (and of experiencing subjective feeling / intuition about them, although given limitations in our current understanding of the famous ‘hard problem’ of consciousness, the fulfillment of these latter criteria cannot be prognosticated);
  • Mind-body interactions: signaling and regulatory mechanisms are provided by emotions which create links between cognitive and other ‘bodily’ activities.

Based on the above, the field of affective computing is multi-faceted and is under continuous development. From a CogInfoCom perspective, certain aspects of the field are more relevant than others. Specifically, results of the field can become highly relevant when they are applied to the modulation of information in infocommunications settings with the purpose of strengthening human-ICT co-evolution from an emotional perspective. At the higher-level scale of collective social interactions, understanding, reasoning about and influencing, through infocommunications, the emotions of a group of people in a city or a region would be a possible CogInfoCom-oriented extension to the field.



Augmented Cognition

Augmented cognition (AugCog) is a research field that was proposed as part of a DARPA program by D. Schmorrow and his colleagues in the early 2000s (St. John et al 2004; [4]; Schmorrow et al 2006; Stanney et al 2009). AugCog aims to “transform human-computer interactions” by “making information systems adapt to the changing capabilities and limitations of the user” (St. John et al 2004). The first international conference on the field was held in 2005.

AugCog primarily addresses cognitive aspects such as attention, memory, cognitive biases and learning capabilities using “cognitive state gauges” based on psychophysiological and neurophysiological measures derived from sources such as EEG, pupil dilation, mouse pressure, heart rate and many others (St. John et al 2004; Stanney et al 2009). By creating a closed loop system between the user and the device, measurements on cognitive state can be directly incorporated into both short-term control and long-term adaptation strategies, allowing for the compensation of cognitive limitations (Fuchs et al 2007; Hale et al 2008).
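The closed-loop idea above can be illustrated with a minimal sketch: a "cognitive state gauge" is derived from physiological measures and fed back into an interface adaptation strategy. All gauge names, normalization constants and thresholds below are hypothetical illustrations, not taken from any AugCog system.

```python
# Hypothetical closed-loop adaptation sketch: a "cognitive state gauge"
# combines physiological measures into a workload estimate, which then
# drives a short-term interface mitigation strategy. All names,
# normalization ranges and thresholds are illustrative assumptions.

def workload_gauge(pupil_dilation_mm, heart_rate_bpm):
    """Combine two measures, each normalized to [0, 1], into a
    single workload estimate in [0, 1]."""
    pupil_norm = min(max((pupil_dilation_mm - 2.0) / 6.0, 0.0), 1.0)
    hr_norm = min(max((heart_rate_bpm - 60.0) / 60.0, 0.0), 1.0)
    return 0.5 * pupil_norm + 0.5 * hr_norm

def adapt_interface(workload):
    """Select a short-term mitigation strategy from the gauge reading."""
    if workload > 0.7:
        return "defer non-critical notifications"
    elif workload > 0.4:
        return "reduce information density"
    return "normal presentation"

reading = workload_gauge(pupil_dilation_mm=5.0, heart_rate_bpm=95.0)
print(adapt_interface(reading))  # -> "reduce information density"
```

In a real AugCog system the gauge would of course be learned or calibrated per user rather than hand-coded, and the adaptation would feed into long-term strategies as well as short-term control.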

AugCog deals with specific types of systems in which human users communicate with information systems in a closed loop. The goal of AugCog is to allow information systems to be "calibrated and tailored to meet the varying human information processing strengths and weaknesses of any individual", mainly based on neuroscientific, psychological and ergonomic considerations. Thus, augmented cognition can be seen as a research area that provides ways to feed augmented information to the human neurobiological system (as opposed to body area networks, which focus on the extraction of information from the human body). If this kind of augmented information transfer occurs specifically between infocommunication systems and humans, then augmented cognition has much to offer for CogInfoCom.

Based on the above, AugCog can be seen as a research area that provides ways to tighten the coupling between users and systems by improving sensing capabilities of cognitive states and manipulating operation and feedback information in appropriate ways. The field shows strong parallels with human-computer interaction – as highlighted in its definition quoted above – but it can also be seen as providing a complementary perspective in the sense that its name speaks about the augmentation of (human) cognition as opposed to the augmentation of artificial capabilities (i.e. to render devices more suited to seamless interaction). When ideas and approaches from AugCog are applied to the modulation of functionality and information flow in infocommunication systems, the links between the field and CogInfoCom are clear. Especially interesting applications are possible when results in AugCog are applied to the sensing of cognitive states in scenarios with multiple participants and multiple devices, as suggested, for example, in (Skinner et al 2013). Such applications are eventually expected to lead to new kinds of ‘augmented sensors’ that are capable of understanding distributed phenomena based on multimodal activities in ICT networks.


Body Area Networks

Body area networks (BANs), or body sensor networks (BSNs), are specialized applications in which a network of sensors either attached to or implanted in the human body communicates physiological information to a central location for further processing and analysis. The predecessors of BANs were referred to as Personal Area Networks (PANs), which were first developed in the second half of the 1990s as networks of electronic devices on or near the human body, exchanging information via small currents passed through the body [5][6]. The term body sensor network was coined in the 21st century as the concept of PAN evolved towards wearable and implantable sensors used for health monitoring purposes. Today, BANs/BSNs are seen as involving all kinds of “monitoring of physical, physiological and biochemical parameters without activity restriction and behavior modification” for applications supporting healthcare, sports and general wellbeing [7].
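A minimal sketch of the central-node side of such a monitoring architecture: readings arriving from on-body sensors are aggregated at a hub and screened against per-parameter normal ranges. The sensor names and ranges below are hypothetical examples, not drawn from any BAN standard.

```python
# Illustrative BAN hub sketch: physiological readings collected from
# body sensors are screened against per-parameter normal ranges.
# Parameter names and range values are hypothetical assumptions.

NORMAL_RANGES = {
    "heart_rate_bpm": (50, 100),
    "skin_temp_c": (35.5, 37.5),
    "spo2_percent": (94, 100),
}

def screen_readings(readings):
    """Return the (parameter, value) pairs that fall outside their
    configured normal range, for escalation or further analysis."""
    alerts = []
    for param, value in readings.items():
        low, high = NORMAL_RANGES[param]
        if not (low <= value <= high):
            alerts.append((param, value))
    return alerts

sample = {"heart_rate_bpm": 112, "skin_temp_c": 36.8, "spo2_percent": 97}
print(screen_readings(sample))  # -> [('heart_rate_bpm', 112)]
```

A production BSN would add time-series storage and machine learning over the long-term record, which is exactly the kind of longitudinal data collection discussed below.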

Although technologies relevant to BANs still focus primarily on healthcare applications, their long-term use can be expected to reach a broader scope of domains. In particular, from a CogInfoCom perspective, the long-term collection of physiological data coupled with machine learning techniques can lead to a kind of cyberization of the human body that in turn may be useful for the design of increasingly “contextually aware” and “physiologically-augmented” infocommunication technologies. Such possibilities for cyberization once again highlight the fact that the term ‘cognitive’ is increasingly applicable to artificial besides biological systems as the ICT network is increasingly characterized by contextually aware components capable of sensing, representing and interpreting sensory patterns from their environment. In the long run, it is conceivable that research on BANs may inspire the creation of body cyberizations belonging to multiple users as a new kind of ‘multi-BAN’ architecture.



Future Internet

Two major directions in Future Internet research are the Internet of Things and 3D Internet. The Internet of Things (IoT) focuses on the integration and virtualization of the physical-biological world (hence both physical objects and humans) together with their capabilities into a comprehensive network through billions of interconnected sensor technologies. This means that rather than seeing IoT as a network of objects, it can be regarded as a network of humans and the objects that matter to them. In a sense, IoT creates a strong physical analogy to the human nervous system: just as the latter provides humans with a ‘sensor network’, IoT implements a global, physical sensor network (examples of this analogy abound in the literature, through less direct terms such as the ‘central nervous system’ – as in the case of the “Central Nervous System of the Earth” project carried out by HP – as well as in direct architectural considerations (Ning and Wang 2011)). Inasmuch as the Internet of Things enhances the capabilities of humans for effective communication, it is expected to provide key insights into the field of CogInfoCom as well. The possibility of treating the human ‘sensory system’ and the global sensory system implemented by IoT in a unified framework is also strongly relevant to CogInfoCom.

Although today the engineering and cognitive science based perspectives through which the two areas are addressed are still markedly distinct, it is conceivable that similar terminologies and methodologies for investigation will be developed in the not too distant future. An overview of architectural designs and applications in IoT can be found in (Uckelmann et al 2011), while (Perera et al 2014) provides an in-depth survey from an application-oriented point of view.

The concept of 3D Internet (3DI), which is a more recent development, focuses on the growing expectation of users for “high-quality 3D imagery and immersive online experience” (Alpcan et al 2007; [8]). 3D Internet is seen as a natural part of the Future Internet concept, because with the appearance of virtualized interfaces to everyday objects, users will still expect to be able to handle them in the same ways (importantly, in relation to the same functionalities) as they do in the physical world. However, once this is established, it becomes clear that 3DI can also be useful for handling 3D representations of content that is not, in its natural form, amenable to direct physical representation. Further, once modifiable representations of both (virtualized) physical objects and abstract data are created, the ability to navigate (i.e. move along trajectories and remember those trajectories) between objects and data sets in a way that seems natural also becomes important. Taken together, these perspectives clearly show that 3D Internet is about much more than visualization and interaction in 3D: it is a mapping between the digital world and (physical) 3D metaphors based on highly evolved human capabilities for representation, manipulation and navigation in 3D. Any technology that achieves this is an important contribution to the field. As a case in point, spatial memory systems – proposed by Professor Niitsuma (Niitsuma and Hashimoto 2009; Niitsuma et al 2007) – which map physical locations and gestures to digital content and automated functionality are strongly relevant to 3D Internet.

Both the Internet of Things and the 3D Internet are expected to pervade our everyday lives in the near future. A consequence of both of these research directions is that users are expected to be able to communicate with both physical and virtual ‘things’ (i.e. everyday objects and objectified abstractions) through the Internet, and also to collaborate with them in ways that depend on both the (artificially cognitive) capabilities of the objects and on the context (i.e., users will need access to different components of the network depending on situational context). These criteria introduce a host of challenges. First, there is the question of augmented collaboration (i.e., the objects, as viewed from the Internet, can be a combination of physical and virtual components). Second, there is the question of scalability (i.e., due to the rapid expansion of content space as an increasing number of objects try to communicate with the user in the Internet of Things). In these regards, CogInfoCom has common interests with Future Internet in terms of selecting the information types that are relevant to the given context, and the modes of communication that are necessitated by those information types.


Human-Computer Interaction and Multimodal Interaction

Human-computer interaction (HCI) is a highly influential multidisciplinary field that focuses on the psychology of interactions between users and computer systems and aims to formulate design principles that are guaranteed, at least in some context, to lead to improved usability [9]. Although the results produced by the field will be foundational as long as humans use computers, the proliferation of information systems providing users with less and less direct, or simply different possibilities for interaction has led to the emergence of fields such as human-machine interaction, human-robot interaction, human-information interaction, and even human-Internet interaction and human ubiquitous computing interaction.

As remarked by Streitz and Nixon, we have to ask whether we are still “interested in interacting with computers”, rather than “interacting with information and collaborating with people”, due to the fact that computers are disappearing from our lives in two senses of the word: in a physical and a mental sense [10]. Physical disappearance refers to changing modes of interaction, while mental disappearance reflects the fact that even if computers are increasingly important in the background, users are also gradually becoming less aware of their existence as their interfaces blend into the everyday physical surroundings. However, this train of thought can be continued further: it may be added that not only the computer, but in many cases the user as an individual entity is also disappearing. On the one hand, the word ‘user’ suggests that we are using something, whereas the long-term co-evolution targeted by CogInfoCom research extends well into the territory of functionally agnostic interaction patterns. On the other hand, there is often value in considering cognitive entities at larger spatio-temporal scales than those characterizing single-user interactions: such entities are by definition heterogeneous and often lack the kinds of clearly delineated interfaces – both internally and externally – which originally gave rise to words such as ‘user’ and ‘interaction’.

Regardless, in cases where individual users are being targeted by new infocommunication technologies, results in HCI are strongly relevant. Given that infocommunications is directed primarily towards the sharing of knowledge through ICT, the following dimensions of interaction – which have been extensively studied in the past – are particularly important to consider:

  • Negative effects of reduced resolution. There are studies which show that it is better to use different modalities than the ones normally used for a given task when the resolution of data flow is reduced through the normal modality.
  • Cross-effects between sensory modalities. Researchers have long ago discovered that the impression that different sensory modalities are independent of each other is more illusory than real. The question as to whether multi-sensory feedback is productive or not has much to do with the redundancy in the information that is presented. However, it is also often possible for one sensory modality to yield realistic sensations normally perceived through another modality, while another sensory modality gives no contribution to realistic sensations, but significantly increases the user's sense of telepresence [11].
  • Sensory dominance. Another key point of interest is how various sensory modalities relate to one another in terms of importance to human cognition. This is referred to as the question of sensory dominance. There are a number of studies which show that vision dominates haptic touch and audition, but it was also shown that relationships of dominance can become more complex if more than two modalities are under stimulation at the same time [12].

Sensory Substitution

Sensory substitution is a form of sensor-bridging CogInfoCom. The basic idea behind sensory substitution, and its utility, was first described by Bach-y-Rita and his colleagues, who, in one of their more recent works, define sensory substitution as "the use of one human sense to receive information normally received by another sense" [13].

The need to broaden the scope of sensory substitution, at least in engineering systems, was also recognized by Bach-y-Rita and his colleagues. This was eloquently highlighted as follows: [13]

"However, in the context of mediated reality systems, which may incorporate multiple modalities of both sensing and display, the use of one sense [...] to display information normally acquired via another human sense [...] or alternatively via a ’non-natural’ sense such as sonar ranging, could be considered to be a form of sensory augmentation (i.e., addition of information to an existing sensory channel). [...] We therefore suggest that, at least in multimodality systems, new nomenclature may be needed to independently specify (a) the source of the information (type of environmental sensor, or virtual model); (b) the type of human information display (visual, auditory, tactual, etc.); and finally (c) the role of the information (substitutive or augmentative), all of which may play a role in reality mediation."

The definition of CogInfoCom addresses many of the propositions made by Bach-y-Rita. More importantly, Bach-y-Rita clearly recognizes that although sensory substitution is sufficient in describing many applications, it could be valuable to broaden the scope of sensory substitution so that it can be used to describe many forms of communication between humans and machines, even if the source or destination of the communication cannot be described using the traditional senses of the human nervous system.
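The classic example of this idea is tactile-vision substitution, in which a camera image drives an array of tactile actuators. A minimal sketch of such a mapping follows; the grid size and the linear brightness-to-intensity mapping are illustrative choices, not a description of Bach-y-Rita's actual devices.

```python
# Minimal sketch of vision-to-touch sensory substitution (in the
# spirit of tactile-vision systems): a grayscale image is downsampled
# to a coarse grid, and each cell's mean brightness drives the
# intensity of one tactile actuator ("tactor"). The grid size and
# the linear mapping are illustrative assumptions.

def image_to_tactor_grid(image, grid=4):
    """Downsample a 2D list of 0..255 brightness values into a
    grid x grid array of tactor intensities in [0, 1]."""
    rows, cols = len(image), len(image[0])
    rh, cw = rows // grid, cols // grid
    out = []
    for gr in range(grid):
        row = []
        for gc in range(grid):
            cell = [image[r][c]
                    for r in range(gr * rh, (gr + 1) * rh)
                    for c in range(gc * cw, (gc + 1) * cw)]
            row.append(sum(cell) / (len(cell) * 255.0))
        out.append(row)
    return out

# A bright square on a dark background activates one tactor region.
img = [[255 if (r < 4 and c < 4) else 0 for c in range(8)] for r in range(8)]
print(image_to_tactor_grid(img, grid=2))  # -> [[1.0, 0.0], [0.0, 0.0]]
```

Under Bach-y-Rita's proposed nomenclature, the same mapping could equally be fed from a 'non-natural' sensor such as sonar ranging, in which case its role would be augmentative rather than substitutive.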


Virtual Reality

In everyday human-machine interaction, both the human and the machine are located in the same physical space and thus the human can rely on his / her natural cognitive system to communicate with the machine. In contrast, when 3D virtualization comes into play, the human is no longer in direct connection with the machine; instead, he / she must communicate with its virtual representation. [14] Thus, the problem of human-machine interaction is transformed into a problem of human-virtual machine interaction. In such scenarios, the natural communication capabilities of the human become limited due to the restricted interface provided by the virtual representation of the machine (for instance, while the senses of vision and audition still receive a considerable amount of information, the tactile and olfactory senses are almost completely restricted in virtual environments, i.e. it is usually not possible to touch or smell the virtual representation of a machine). For this reason, it becomes necessary to develop a virtual cognitive system which can extend the natural one so as to allow the human to effectively communicate with the virtual representation, which essentially becomes an infocommunication system in this extended scenario. One of the primary goals of cognitive infocommunications is to address this problem so that sensory modalities other than the ones normally used in virtual reality research can be employed for the perception of a large variety of information on the virtualized system.

After working with VR for a while, the user can get the feeling of being a ghost: he / she can see and hear everything in the virtual reality, but cannot touch and manipulate objects the way this can be done in the real, physical world. For this reason, it becomes important for CogInfoCom to create a body for this ghost floating in VR. This can perhaps be achieved if higher-level communication is possible – i.e., between the capabilities of the user's cognitive system and the infocommunication device that is used to interface with the virtual reality.


Research Areas and Initiatives Under CogInfoCom

The CogInfoCom conference series has given rise to a number of new research areas and initiatives. This section provides a brief overview. Further details can be found in the various journal special issues which have appeared on CogInfoCom-related subjects, as well as in the upcoming book on CogInfoCom (to appear in late 2015) [1].

CogInfoCom channels

Interaction and communication between cognitive entities can occur at numerous levels depending on the semantics and pragmatics of the situation. Finding the interaction mode, or language that is most suitable for a given application is an important challenge, especially if the information type to be conveyed is such that the cognitive entity has no past experience in interpreting it through any form of sensory pattern. As the expansion of cognitive content space progresses, this is often precisely the case. As a result, the question of how sensory representations can be created anew to characterize novel concepts is expected to gain increasing relevance.

The framework of CogInfoCom channels addresses the above challenges by combining both structural and semantic elements to define sets of sensory messages with graded meanings of high-level concepts (Csapo and Baranyi 2012b). From a syntactical point of view, icon-like and message-like design elements in interfaces developed for the auditory, haptic and other modalities (e.g., auditory icons and earcons (Blattner et al 1989), haptic icons and hapticons (Maclean and Enriquez 2003; Enriquez and MacLean 2003), olfactory icons and smicons (Kaye 2004)) are generalized into a layered framework in which messages are built up of lower-level icons (Csapo and Baranyi 2012d). While icons support direct meaning, messages are generally multi-dimensional – both from a perceptual and from a conceptual point of view – and are often characterized by the sequential occurrence of iconic elements. The CogInfoCom channel framework includes a concept algebra-based toolset for the mapping of semantic meaning to messages, as well as a parametric cognitive artifact, referred to as the spiral discovery method (SDM), which allows users to fine-tune the parametric mapping between generating parameters (used to generate messages) and semantic gradations (Csapo and Baranyi 2012c). More recently, the framework has been extended to include concepts adapted from biology, with the goal of modeling the evolution of communication as it occurs in natural settings. If communication is seen as an adaptive process that evolves through time to convey meaning about emergent concepts, the biological phenomenon of ritualization, by which implicit cues evolve into purposeful signals, can provide important ideas for further development (Scott-Phillips et al 2012).
Accordingly, signal ritualization has been extended in the CogInfoCom channel framework with channel differentiation, through which individual messages can evolve into sets of graded messages with variations that depend on the contextual background (Csapo and Baranyi 2013).
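The layered icon/message structure can be made concrete with a small sketch: a message is a sequence of lower-level icons, and a scalar semantic gradation is mapped to the generating parameters of those icons. The parameter names (pitch, tempo) and the linear mapping below are hypothetical choices to show the structure; they are not taken from the SDM method itself.

```python
# Illustrative sketch of the icon/message layering in CogInfoCom
# channels: a semantic gradation in [0, 1] is mapped to generating
# parameters, and a message is assembled as a sequence of icons
# sharing those parameters. Parameter names and the linear mapping
# are hypothetical assumptions, not the actual SDM formulation.

def generate_message(gradation, n_icons=3):
    """Map a semantic gradation in [0, 1] to a message: a sequence
    of icon parameter dictionaries."""
    pitch_hz = 220.0 + gradation * 440.0    # milder concept -> lower pitch
    tempo_bpm = 60.0 + gradation * 120.0    # stronger concept -> faster
    return [{"icon": i, "pitch_hz": pitch_hz, "tempo_bpm": tempo_bpm}
            for i in range(n_icons)]

mild = generate_message(0.1)
severe = generate_message(0.9)
print(mild[0]["pitch_hz"], severe[0]["pitch_hz"])  # -> 264.0 616.0
```

Channel differentiation, in this picture, would correspond to one such message evolving into a family of gradation-dependent variants, each tuned to a different contextual background.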

Speechability

There is strong empirical support for the view that humans evaluate interactions with ICT and human-human social interactions based on analogous criteria (Nass and Yen 2010). As a result, the various aspects of how humans communicate in everyday social interactions cannot be neglected when augmenting the social capabilities of cognitive networks.

The modality of speech is a central component of social interactions in everyday life. For several decades before the emergence of CogInfoCom, researchers have aspired not only to transmit speech between geographically distant locations, but also to enable artificially cognitive systems to understand and communicate through verbal utterances in the most natural (i.e., “human”) way possible. The reductionist approach applied to this latter problem has quickly resulted in the appearance of increasingly specialized sub-branches focusing on a wide range of verbal phenomena. This fragmentation of a research paradigm that was once fueled by a single goal is not surprising, given that speech itself cannot be fully characterized by any single dimension. Rather, it involves co-dependent interactions among such aspects as word usage, prosody, facial expressions, hand gestures, body postures and movements, as well as higher-level constraints on the dynamics of dialogue imposed by social norms and cultural specificities. A further source of heterogeneity is brought about by the fact that the modality of speech is used for more than a single purpose: its goal is not merely to support the sharing of information, but also to help create trust and more generally strengthen those kinds of social / behavioral patterns that are conducive to the maintenance of society. Removing any of these aspects from human speech, as is clear from extensive research on virtual communication agents, renders interaction unnatural and cumbersome.

Speechability aims to reverse the tendency towards fragmentation described above by attempting “to link cognitive linguistics with verbal and non-verbal social communicative signals” through human-ICT tangleactions (Campbell 2012; Benus et al 2014b). The latter qualification refers to the emergent effects of long-term co-evolution in terms of a broadening range of natural and artificial modalities applied together towards long-term goals, and consequently the increasing availability of raw data that is collected for e.g. machine learning or other post-processing purposes. Such long-term entanglement will lead to capabilities whose scope extends far beyond the generation and understanding of speech, so as to encompass application areas such as speech rehabilitation, tutoring for the learning of foreign languages, or communicational training.

Approaches applied in speechability research are also rooted in the observation that speech is an embodied phenomenon, i.e. it is interpreted through reference to physical interaction in specific social contexts. Thus, a unified approach is suggested that acknowledges the fact that humans and machines have different embodiments – albeit ones that are converging as novel cognitive entities. If this point of view is adopted, then even without long-term tangleactions, speech-related capabilities and phenomena can be mapped onto artificial ICT capabilities in ways that suit underlying differences in embodiment rather than denying them. In this way, human capabilities are supported rather than copied, and the criterion for technology to be human-like becomes relatively less important. This explains the term ‘speechability’, as distinct from ‘speech ability’ (i.e., the ability to speak as humans do), which reflects the complex, embodied nature of speech phenomena in cognitive entities. Crucially, while speechability includes speech ability (including artificial capabilities for speech generation and recognition), it also encompasses a broader range of tanglefaced applications.

Socio-cognitive ICT

Computer networks of all kinds have emergent cognitive properties due to the constraints imposed on their operation through various user interactions. Today’s Internet is no exception to this rule. However, as the content space that is handled through the Internet is augmented with new cognitive data and information types, a growing set of functions and value chains are conceivable.

Many recently developed applications can be seen as directed towards augmenting the social capabilities of cognitive networks based on the analysis, manipulation and management of information flow. For example, content and metadata-based analysis of user activity is used to gain a better understanding of spatially distributed, crowd-generated phenomena – including social-political tendencies, the spread of ideas and epidemics, etc. Similarly, high-level structural and organizational management of cognitive networks is applied to the optimization of information flow in critical situations, such as in workflow management and disaster recovery. The effective treatment of such critical situations necessitates a hierarchical allocation of both human and technological resources that is crucially enforced by technology, in much the same way as face-to-face human interaction patterns are governed and to some extent enforced by social conventions.

Applications such as these can be categorized as operating at the highest level of entanglement between humans and ICT, or among heterogeneous cognitive entities in more complex cases. In this case, collective behaviors are used to support either individual user interactions, or the prediction / analysis of collective behaviors and events. Due to the clear relevance of both social and cognitive capabilities to such applications, the term ‘socio-cognitive ICT’ – proposed by Professor Hassan Charaf and his research group at the Budapest University of Technology and Economics – has been increasingly used to describe them (Szegletes et al 2014; Fekete and Csorba 2014; Kovesdan et al 2014). Interestingly, this choice of description is not without parallels from the past, even from a technological perspective. Hemingway and Gough described design challenges relevant to ICT as a conflict between different “goals and underlying values” in the fields of software engineering, human-computer interaction and information systems – the latter of which is “generally concerned with the longer-term impacts of information and communication systems on organizations and society” (Hemingway and Gough 1998). In this interpretation, the term ‘socio-cognitive’ is used to describe all aspects encompassing the ways in which social phenomena affect, and the ways in which they are affected by ICT infrastructure. Sharples et al. describe socio-cognitive engineering as a general methodology that “aims to analyze the complex interactions between people and computer-based technology and then transform this analysis into usable, useful and elegant socio-technical systems” involving technology and social context. In a way somewhat similar to earlier works, the paper also views this approach as an integration of fields such as software, task, knowledge and organizational engineering (Sharples et al 2002).
In general, the important difference between such earlier uses of the term ‘socio-cognitive’ and its use in ‘socio-cognitive ICT’ is that socio-cognitive ICT focuses on all aspects of network management and networked experience as opposed to the design-centric perspective adopted in earlier works.

Ethologically inspired CogInfoCom (EtoCom)

There are strong parallels between the evolution of biological systems and the evolution of ICT. In much the same way that biological systems evolve, the development of technologies and devices in ICT can also be characterized as evolution: a process that unfolds incrementally, both in terms of form factor and functionality, based on after-the-fact selection mechanisms imposed by technological and market trends. If this is accepted as a starting point, the question arises whether an evolutionary process that targets behavior can also be interpreted in, or developed for, ICT applications.

Ethology-based CogInfoCom (EtoCom) is a research direction initiated by Professor Adam Miklosi and his research group that aims to describe, interpret and design such a kind of behavioral evolution (Lakatos and Miklosi 2012; Szabo et al 2012). Besides considering the evolution of form factors and functional capabilities in ICT devices from an ethology-based perspective, EtoCom aims to do the same with respect to mutual behaviors directed towards long-term co-evolution. For example, it is well-known to ethologists that the communication between humans and dogs has been mutually satisfying for thousands of years, even though dogs and their owners are by no means co-equal and the patterns of interaction between them are motivated by concepts such as attachment and separation anxiety instead of specific, goal-oriented considerations (Lakatos and Miklosi 2012; Miklosi and Soproni 2006; Viranyi et al 2004; Miklosi et al 2003; Topal et al 1998). In this context, an important research question is whether the concepts of attachment and separation anxiety are suitable for use in engineering design; in other words, would it benefit human-ICT co-evolution if ICT devices were capable of displaying signs of attachment or separation anxiety?

Based on this question, it is clear that EtoCom has strong links to and significant overlaps with affective computing; however, rather than considering a primarily emotion-based perspective, it focuses on a broader scope of evolutionary behaviors with an outlook towards long-term entangled interactions. The abstraction of cognitive capability which characterizes CogInfoCom research is also present in EtoCom in a strongly embodied sense, allowing for any kind of heterogeneous cognitive entity to evolve long-term behaviors.

CogInfoCom in industrial robotics and production management

As pointed out by several authors, CogInfoCom research has relevance to applications in industrial robotics and industrial production by supporting novel types of inter-cognitive communication among designers, engineers, management and industrial machines. In many cases, this communication occurs through ‘brain-in-the-loop’ type interactions, in which the role of the human operator is to make small, yet indispensable and thus extremely important contributions to otherwise essentially automated processes at just the right times. Such brain-in-the-loop communication is increasingly important not only in the operation, but also in the design of complex industrial systems. The overarching point is that there are certain decisions that can only be made by the human operator, designer, engineer or stakeholder – and even if the time required to make these decisions is only 2% of the complete process (with 98% of the process being automated), their functional impact is crucial. It is important for industrial technologies to be able to merge the intelligence and high-level decision capabilities of the human brain with such processes in the most convenient, effective and ergonomically tractable way. At the same time, the creation of new cognitive entities through the human actors involved in the processes together with the ICT network surrounding them is equally important. Only by addressing both of these challenges can the progress of long-term industrial goals be most effectively supported.

One of the key research areas within this domain focuses on flexible robot programming and control. Several authors have noted that most industrial systems require some degree of reconfiguration, and that this can pose challenges especially for small and medium-sized enterprises with an interest in small-series production. To address these challenges, several approaches for using CogInfoCom channels in human-robot communication, as well as for programming industrial robots based on heterogeneous input sources (including CAD models and human motions captured through 3D sensor systems), have been proposed (Aryania et al 2012; Solvang and Sziebig 2012; Thomessen and Kosicki 2011; Thomessen and Niitsuma 2013; Pieska et al 2014). A further area of interest is the augmentation of human and / or institutional capabilities for proactive innovation. Several important ideas were introduced and summarized in recent works by Pieska, Ericson and others (Pieska et al 2014; Ericson et al 2014).

Mathability

Mathability was initiated by Professor Attila Gilanyi, and defined at the CogInfoCom 2013 conference as a research direction that investigates “artificial and natural [as well as combined] cognitive capabilities relevant to mathematics” ranging from “low-level arithmetic operations to high-level symbolic reasoning” (Baranyi and Gilanyi 2013; Borus and Gilanyi 2013; Torok et al 2013). Importantly, in much the same way that the focus of speechability extends beyond speech capabilities, mathability focuses not only on human and artificial mathematical capabilities, but also on the mathematical capabilities of humans and ICT together with the heterogeneous cognitive entities they give rise to. Thus, one of the key questions behind mathability is whether mathematical capability can be understood – and abstractions of it created – so as to facilitate the design and strengthening of mathematical capabilities in emergent cognitive entities. An important motivation behind mathability lies in the observation that in the past decades, even the notion of what qualifies as a ‘proper’ solution to a mathematical problem has changed. While a few decades ago, only analytically closed formulae would have been accepted, today it is not uncommon for so-called granular (i.e. numerical, sub-symbolic) formulations to be seen as equally useful and acceptable. Although initially contested by many, this tendency is by now seen as natural, as increasingly complex problems need to be addressed in both the engineering and social sciences. However, despite these changes, the human capacity to think in numerical terms remains limited, and analytically tractable deductive methodologies are preferred. The challenge, then, is how to bridge between these two worlds of analytical and numerical approaches in a way that is suitable for the problems at hand to be tackled together by cognitive entities (Baranyi and Gilanyi 2013; Torok et al 2013).
Ideally, ICT devices would be able to guide users through solutions to mathematical problems, letting users know how they can help along the way. If such processes were possible, the humans involved would be able to make analytical decisions on which deduction route to pursue further whenever necessary, while the ICT components involved would focus on applying the numerical tools best suited to the given context.
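The contrast between the two solution routes discussed above can be illustrated with a minimal sketch. The example below solves the same equation, x² − x − 1 = 0, once analytically (a closed formula, the route humans find deductively tractable) and once numerically (bisection, a granular method that only requires the ability to evaluate the function). The function names `solve_analytic` and `solve_bisect` are hypothetical, chosen here purely for illustration.

```python
import math

# Analytical route: a closed-form solution via the quadratic formula.
# (The positive root of x^2 - x - 1 = 0 is the golden ratio.)
def solve_analytic(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a)

# Numerical ("granular") route: bisection, which only needs to evaluate
# the function, not manipulate it symbolically.
def solve_bisect(f, lo, hi, tol=1e-12):
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

phi_exact = solve_analytic(1, -1, -1)
phi_numeric = solve_bisect(lambda x: x * x - x - 1, 1.0, 2.0)
print(abs(phi_exact - phi_numeric) < 1e-9)  # the two routes agree
```

In the mathability framing, a human might choose which deductive route to pursue (recognizing that a closed formula exists here), while the ICT component falls back on numerical methods such as bisection when no closed form is available.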

Another aspect relevant to mathability – and also specifically to the motivation considered just now – is the question of how mathematical capabilities (now treated in an abstract sense that applies not only to humans) can be graded. To consider a simple example: if someone learns that a schoolchild finishing the 5th grade received a C in mathematics, he or she will have a fairly good understanding of the level at which the child can be spoken to about mathematics. Importantly, this level is not (only) defined by any particular subject matter, but much more so by the capacity of the child to understand certain concepts, and his or her lack of capacity to understand others. The question is whether such a grading system could be created for ICT devices, and ultimately for emergent cognitive entities as well, in a way that is useful in practical scenarios. Based on such a grading system, it would be possible to understand how complex tangleactions could be directed at solving real-world problems using mathematically sound approaches.

References

  1. http://www.springer.com/gp/book/9783319196077
  2. R. Picard, Affective Computing. The MIT Press, 1997.
  3. R. Picard, "What does it mean for a computer to 'have' emotions?", in Emotions in Humans and Artifacts, pp. 213–235, 2003.
  4. D. Schmorrow (ed.), Foundations of Augmented Cognition. Lawrence Erlbaum Associates, 2005.
  5. T. Zimmerman, "Personal Area Networks: Near-field Intrabody Communication", IBM Systems Journal, 35(3):609–617, 1996.
  6. T. Zimmerman, "Wireless networked digital devices: A new paradigm for computing and communication", IBM Systems Journal, 38(4):566–574, 1999.
  7. Body Sensor Networks, Second Edition. Springer, 2014.
  8. P. Kapahnke, P. Liedtke, S. Nesbigall, S. Warwas, and M. Klusch, "ISReal: An Open Platform for Semantic-Based 3D Simulations in the 3D Internet", Lecture Notes in Computer Science, Volume 6497, pp. 161–176, Springer, 2010.
  9. S. K. Card, T. P. Moran, and A. Newell, The Psychology of Human-Computer Interaction. Lawrence Erlbaum Associates, 1986.
  10. N. Streitz and P. Nixon, "The disappearing computer", Communications of the ACM, 48(3):32–35, 2005.
  11. F. Biocca, J. Kim, and Y. Choi, "Visual Touch in Virtual Environments: an Exploratory Study of Presence, Multimodal Interfaces and Cross-Modal Sensory Illusions", Presence: Teleoperators and Virtual Environments, 10(3), pp. 247–265, 2001.
  12. D. Hecht and M. Reiner, "Sensory Dominance in Combinations of Audio, Visual and Haptic Stimuli", Experimental Brain Research, 193, pp. 307–314, 2009.
  13. P. Bach-y-Rita, M. E. Tyler, and K. A. Kaczmarek, "Seeing with the brain", International Journal of Human-Computer Interaction, 15(2):285–295, 2003.
  14. G. Riva and F. Davide, "Communications through virtual technologies", in Identity, Community and Technology in the Communication Age, IOS Press, 2001, pp. 124–154.

License

The text of this website is available for modification and reuse under the terms of the Creative Commons Attribution-ShareAlike 3.0 Unported License and the GNU Free Documentation License (unversioned, with no invariant sections, front-cover texts, or back-cover texts).
