CogInfoCom - Cognitive Infocommunications


Motivation behind CogInfoCom

Cognitive infocommunications (CogInfoCom) is an interdisciplinary research field that has emerged as a synergy between infocommunications and the cognitive sciences. One of the key observations behind CogInfoCom is that through a convergence process between these fields, humans and ICT are becoming entangled at various levels, as a result of which new forms of cognitive capability are appearing. Crucially, these capabilities are neither purely natural (i.e., human), nor purely artificial; therefore, it is suggested that they should be treated in a way that unifies both engineering and human-oriented perspectives.

Key features of CogInfoCom include:

  • Focus on cognitive capability: rather than merely focusing on ways in which humans, devices and ICT infrastructures interact, the field acknowledges the notion of ‘cognitive capability’ – an abstraction which allows for the introduction of temporal and contextual considerations into the analysis and design of relevant phenomena;
  • ... from a generic perspective: rather than restricting the notion of cognitive capability to humans alone, the field adopts the point of view that such capabilities are emergent properties of any continued interaction and communication that potentially involves both natural and artificial components and systems;
  • ... at various (particularly at large) temporal scales: rather than merely focusing on individual goal-oriented interactions at clearly specified points in time, the field adopts the point of view that the entanglement between humans and ICT is resulting in the necessity to consider their co-evolution at larger scales of time;
  • ... in the context of emergent functionality: rather than focusing exclusively on function-oriented interactions, the field also considers cases where functionalities developed at one time to fulfill a specific goal acquire new roles and uses – a phenomenon that is caused by constant change and growth in complexity of the elements (and relationships thereof) defining human-ICT networks.

A detailed account of the background behind CogInfoCom can be found in various papers as well as in an upcoming book (to appear in late 2015) [1][2][3].


Definition of CogInfoCom

The first draft definition of CogInfoCom was given in [4]. The definition was finalized, based on that paper, with the joint participation of the Startup Committee at the 1st International Workshop on Cognitive Infocommunications, held in Tokyo, Japan in 2010. The finalized version of the definition is reproduced here (from [1]):

"Cognitive infocommunications (CogInfoCom) investigates the link between the research areas of infocommunications and the cognitive sciences, as well as the various engineering applications which have emerged as the synergic combination of these sciences. The primary goal of CogInfoCom is to provide a systematic view of how cognitive processes can co-evolve with infocommunications devices so that the capabilities of the human brain may not only be extended through these devices, irrespective of geographical distance, but may also interact with the capabilities of any artificially cognitive system. This merging and extension of cognitive capabilities is targeted towards engineering applications in which artificial and/or natural cognitive systems are enabled to work together more effectively."

As discussed in the following section, the implicit and explicit assumptions underlying CogInfoCom together form a unique viewpoint. As a result, new notions and concepts capable of leading to new research directions are continuously emerging. Here, two early concepts central to multi-sensory communication between various levels of cognitive capability are introduced: the mode of communication and the type of communication. In the future, these concepts may be extended to provide a more detailed qualification (and, in the long run, quantification) of cognitive capabilities independently of the exchange of communicational messages.

The mode of communication characterizes the relative cognitive capabilities of the actors involved in communication:

  • Intra-cognitive communication: information transfer occurs between two cognitive entities with equivalent cognitive capabilities (e.g., between two humans, or between two humans in the same social-technological environment – as determined by what is relevant to the application).
  • Inter-cognitive communication: information transfer occurs between two cognitive entities with different cognitive capabilities (e.g., between a human and an artificially cognitive system, or between two humans in a different social or technological environment – as determined by what is relevant to the application).

The type of communication refers to the type of information that is conveyed between the two communicating entities, and the way in which this is done (a schematic code sketch of both distinctions follows the list below):

  • Sensor-sharing communication: cognitive entities on both ends use the same sensory modality to receive information.
  • Sensor-bridging communication: sensory information is not only transmitted, but also transformed to a different, more appropriate sensory modality of the receiving cognitive entity.
  • Representation-sharing communication: the same information representation is used on both ends of communication.
  • Representation-bridging communication: sensory information is filtered and/or adapted so that a different information representation is used on the two ends of communication.
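
To make this taxonomy concrete, the minimal Python sketch below models the mode and the type of communication as enumerations and tags a single channel with them. The names used here (Mode, Type, Channel) are illustrative assumptions introduced for this sketch only, not part of any CogInfoCom reference implementation.

  from dataclasses import dataclass
  from enum import Enum, auto

  class Mode(Enum):
      """Relative cognitive capabilities of the communicating actors."""
      INTRA_COGNITIVE = auto()  # equivalent capabilities (e.g., human <-> human)
      INTER_COGNITIVE = auto()  # different capabilities (e.g., human <-> artificial system)

  class Type(Enum):
      """How the conveyed information is presented on the receiving end."""
      SENSOR_SHARING = auto()           # same sensory modality on both ends
      SENSOR_BRIDGING = auto()          # transformed to a different sensory modality
      REPRESENTATION_SHARING = auto()   # same information representation on both ends
      REPRESENTATION_BRIDGING = auto()  # filtered/adapted to a different representation

  @dataclass
  class Channel:
      """A single CogInfoCom channel, tagged with its mode and type."""
      description: str
      mode: Mode
      comm_type: Type

  # Example from later in this page: a robot's operating sound relayed to a human teleoperator
  channel = Channel(
      description="robot operating sound relayed to a human teleoperator",
      mode=Mode.INTER_COGNITIVE,
      comm_type=Type.SENSOR_SHARING,
  )
  print(channel)

Tagging channels in this way is only meant to illustrate that the mode and the type of a given communication act are independent dimensions; a real application would attach further context (modality, environment, temporal scale) to such a description.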

Remarks

  1. A sensor-sharing application brings novelty to traditional infocommunications in the sense that it can convey any kind of normally perceptible signal (i.e., a signal that could be perceived if there were no distance to communicate across) to the other end of the communication line. The key determinant of sensor-sharing communication is that the same sensory modality is used to perceive the information on the receiving end of communication as would be used if there were no distance between the sending and receiving ends.
  2. Sensor bridging can reflect both the way in which information is conveyed (i.e., by changing sensory modality) as well as the novelty of the information type that is conveyed. Whenever the transmitted information type is imperceptible to the receiving entity due to a lack of appropriate sensory modality, communication will necessarily occur through sensor bridging.
  3. A CogInfoCom application can be regarded as an instance of representation sharing even if it bridges between different sensory modalities. By considering the general characteristics of a representation (e.g., its character-based, icon-based etc. nature) rather than the specific details of its physical or biological manifestation, it becomes possible to describe representations of different modalities in unified ways.

Convergence of Fields behind CogInfoCom

General perspective

It is a common phenomenon for newly established fields to go through a process of maturation and ultimate convergence. The evolution of informatics, media and communications is no different: although these fields initially had different goals and applied different methodologies, their maturation and growing pervasiveness has led to the emergence of new synergies. Thus the fields of infocommunications, media informatics and media communications appeared in the latter half of the 20th century. The subsequent evolution of these disciplines, in turn, has resulted in still newer patterns of convergence. As modern network services aim to provide an increasingly holistic user experience, the infocommunications sector now encompasses "all information processing and content management functions [...] of information technology and electronic media" [2].

Parallel to these developments, with the enormous growth in scope and technological relevance of the cognitive sciences, the new fields of cognitive media [5][6][7][8], cognitive informatics [9][10] and cognitive communication(s) [11][12][13] are gradually emerging. These fields have either already made their way, or are steadily on their way, into standard university curricula, and will eventually become a natural part of collective awareness.

For example, a quick search reveals that several research groups and companies around the world have ‘cognitive media’ (sometimes together with the suffix ‘technologies’) in their name. While originally the field was strongly motivated by new prospects of virtually augmented, interactive education, today it is driven by a more general focus on how various forms of media can be analyzed in terms of their effects on human cognition, and how they can be applied to sharing information in ways that appeal to human cognitive capabilities. As a result, any research focused on interactive media, or on interaction technologies in general, has strong relevance to the field. Another factor of growing importance behind cognitive media is the increasing prevalence of artificial sensory capabilities implemented in media: in a way analogous to the human nervous system, the Internet, as an interconnection of globally distributed devices and nodes, together with the media applications based on it, can be seen as an artificial nervous system and an artificial set of sensory modalities. For example, data available on social network platforms such as Facebook and Twitter are increasingly used to predict and understand physical, mental and social processes taking place in the world. According to this view, cognitive media targets not only the cognitive effects that media has on humans, but also the cognitive capabilities of media itself.

Similarly, the terms ‘cognitive informatics’ and ‘cognitive communication’ / ‘cognitive communications’ are omnipresent in delineations of research focus. A definition of cognitive informatics can be found on the official web pages of a number of research organizations (e.g., the Pacific Northwest National Laboratory funded by the U.S. Department of Energy). Today there are several research institutes dedicated to cognitive informatics, such as the Institute for Cognitive Informatics (ICON) in Sheffield, UK, which has an annual budget of over 1 million GBP. An IEEE International Conference on Cognitive Informatics has been held every year since 2002, and several symposia with leading researchers invited as keynote lecturers have been held in the past few years. Cognitive communication, when used without the trailing ‘s’, can refer to the study of the ways in which humans anticipate context as a contributor to the choice of communication style, and perceive the consequences of communication choices (in short, it is the study of human mental models of communication). With the trailing ‘s’, on the other hand, cognitive communications is used to refer to devices and networks which can assign resources and functionalities in dynamic and intelligent ways (as in, e.g., cognitive radio or cognitive networks). It is possible that in the future, these two directions will reach common formulations for certain problems if user needs and application scenarios are considered as contributing factors to the adaptivity of cognitive radio.



Infocommunications perspective

As discussed in the previous subsection, there has been a constant convergence between infocommunications, media informatics and media communications over the past two decades. As a result, infocommunications today has a broader focus than before (while some aspects, naturally, are no longer considered to be part of infocommunications).

The three fields of media, informatics and communications originally developed separate theories, but are converging towards each other today. From a research-historical point of view, CogInfoCom is situated in the region between cognitive informatics and cognitive communications.

An up-to-date and comprehensive outline of how infocommunications has evolved to become the science it is today can be found in [14]. The author demonstrates convincingly that the convergence leading to the infocommunications of today (i.e., without the participation of the cognitive sciences) was thoroughly analyzed in the mid-1990s, and was soon recognized both by the European Commission and by the International Telecommunication Union. The consensus is that this convergence took place at three levels, manifesting itself in the unification of technologies, the integration of markets and the harmonization of regulation [2]. Thus, today the same devices that are used to communicate with friends, family and colleagues can also be used to access and actively process digital media content (hence, both the technology and the market are available for unification). Further, it is due to the harmonization of regulation that the cost of using modern infocommunications devices is transparent with respect to the kind of information that they are used to process and transmit.

The convergence process and its future prospects can be summarized in the following four steps [2]:

  • Traditional separation, internal digitization. The technology, market and regulation behind various content types (e.g., voice, text, audio-visual) are managed separately.
  • Unified telecommunications. A partial unification was possible from a technological point of view with the rapid development of digital technology. Hence, it became possible to handle different kinds of content with the same devices. On the other hand, the electronic media production industry had yet to become involved in the convergence process.
  • Infocommunications. The electronic media content producing industry, as well as the appropriate institutions for regulation joined the convergence process to produce the technological and social-economic revolution that is today's infocommunications industry.
  • Cognitive infocommunications. A natural fourth step is the integration of the cognitive sciences into the convergence process behind infocommunications. On the one hand, this involves an expanding content space, in which novel types of information are gathered, inferred and reasoned upon in novel ways. On the other hand, infocommunications is essential to this development because the content must also be presented to human users in a way that appeals to their unique cognitive capabilities.

As a result of the synergy between the cognitive sciences and infocommunications, the content space that is accessed and manipulated by both users and ICT is expected to grow in size and richness of interpretation. If the potential behind this expanding content space is to be harnessed, it can be expected that the respective unique capabilities of humans and ICT will be applied through new, long-term interaction patterns, leading to the emergence of new cognitive entities. A significant challenge in CogInfoCom is how to ‘power’ these cognitive entities with the kinds of information and functionality that are suitable to their roles and requirements.


New conceptual view behind CogInfoCom

As mentioned earlier, one of the key observations behind CogInfoCom is that there is a merging process between humans and ICT which is resulting in increasingly complex forms of human-ICT entanglement, and which is at the same time creating the necessity for an analogous convergence between technology and the human-oriented cognitive sciences. The phenomena of merging and entanglement in the context of ICT are clear not only from everyday experience, but have also been remarked upon and analyzed to various degrees and in various contexts by many authors.

CogInfoCom proposes a new and unified conceptual approach in which the process of merging is derived from the theoretically unified concept of different levels of cognitive capabilities co-existing in the information space (irrespective of whether they are natural or artificial capabilities, and whether they are individual capabilities or capabilities which emerge from a cloud of artificial and/or biological components). This derivation extends to a variety of aspects, including:

  • Low-level direct entanglement, including forms that rely on invasive and non-invasive interfaces (as in e.g. brain-computer interfaces). Entanglement at this low level allows for direct sensing and control; however, it is relatively cumbersome in that it requires sensors to be implanted or worn, and it is difficult to operate at conceptually higher levels of command.
  • Entity-level multisensory entanglement at the level of personal informatics devices, in which communication and interaction occur through (human – but crucially not only human) sensory modalities. Questions of what kind of "communication language" to use (i.e. in terms of message encoding) – depending on the semantics of the information, as well as, among other factors, the modality to be used, the application environment, and the user’s individual cognitive capabilities – are strongly relevant to this level of entanglement. It is important to note that the challenge consists not only in providing effective and ergonomic interface design, but also in accommodating the transfer of an expanding set of semantic concepts – relevant at large temporal scales, for instance in co-existive smart home and other augmented virtual reality applications – through the limited possibilities afforded by human sensory modalities.
  • Collective entanglement, which occurs at the collective level of multi-user interactions. Applications in this layer can be linked to collective behaviors in two ways: by making use of collective behaviors in order to support individual users’ interaction with a system; or alternatively, by supporting the prediction or a posteriori analysis of collective events based on an analysis of past behaviors (both individual and collective). Such applications often rely on the mining and analysis of vast amounts of heterogeneous data sources – including, e.g., activities on social communication platforms.

From a CogInfoCom perspective, any kind of hardware or software component that actively collects / stores / understands / communicates data can be seen as a component with a set of cognitive capabilities. Whenever users become entangled with a system of such capabilities, the border between natural and artificial gradually becomes vague. In other words, it is often the case that there is no longer any objective border between purely human and purely artificial cognitive capabilities. For example, in a scenario where a user controls an industrial robot with one hand using knowledge obtained from a smartphone held in her other hand, the question immediately arises: should this interaction be characterized from the perspective of communication between three different entities, or is there benefit in viewing the user and the smartphone as one entity that is communicating with the robot? The answer to this question is important, if only for the fact that both the robot and the supporting smartphone application might be designed differently if it is known in advance that they will be used together in this specific scenario, or if the cognitive effects that the smartphone application will have on the user – such as limited dexterity and attention, increased capabilities for information access, etc. – are known in advance. To consider two further examples, the boundary between artificial and human capabilities would be equally blurred in a scenario where a user’s lower arm is augmented through a robot arm that is capable of downloading new ‘skills’ from an ICT network, or in a scenario where a pair of augmented glasses or an augmented helmet is used to provide an industrial operator with real-time information feeds complementing the task at hand (such technologies are already present in industry, and are on the verge of commercial breakthrough).

The bottom line is that specifying a boundary between entities is not merely a matter of philosophical inclination: it is also necessary from the functional perspective of engineering design. On the one hand, in a domain where difficult problems of synthesis can be effectively tackled only by breaking them down into smaller components, glued together through some form of communication once they are complete, the functional boundaries at which this is done can make or break the tractability and sustainability of an implementation. On the other hand, once it is accepted that the boundaries between artificial and natural are not as clean as they were a few decades ago, unprecedented possibilities emerge for the development of new functionalities – even cognitive capabilities. Such capabilities can be seen as implemented in the dependencies between components, in much the same way as lower-level functionalities are created as a result of several different components working appropriately in mutually specified contexts. This dependence among capabilities can be seen as leading to a hierarchical organization of cognitive capabilities.


Research Background of CogInfoCom

Based on trends in EU and other government-supported research, the early forms of CogInfoCom arose through the course of domestic and international projects. The term cognitive infocommunication appeared as early as 2005, and was mentioned in several papers by the end of the first decade of the 21st century [15][16].

Although CogInfoCom itself is a newly established research discipline, its emergence is directly related to a past of several decades. During this time, both the cognitive sciences and infocommunications have seen considerable growth. As a result, it is now commonplace to talk about cognitive machines, cognitive robotics and cognitive informatics on the one hand, and about the infocommunications revolution and the convergence between informatics, media, infocommunications and regulation on the other. These trends are leading to the gradual appearance of artificial cognitive capabilities, i.e. capabilities that are directed towards a broadened scope of sensing and processing of unstructured data. Users are becoming accustomed to accessing these artificial cognitive capabilities through their infocommunications devices in a wide variety of contexts. However, more than just allowing users to access these capabilities, infocommunications devices of the future are also expected to allow users to extend their cognitive capabilities with artificial ones through extended periods of co-evolved interactions (such interactions can be referred to as 'tangleactions' due to their long-term, entangled nature). This synergy of natural and artificial cognitive capabilities will be applicable in flexible and novel ways, both in physical contexts and in virtual worlds.

In terms of motivation and possible application areas, CogInfoCom has common interests - from certain perspectives - with a number of other fields, as described here. However, in terms of studying long-term co-evolution at a higher abstraction level of cognitive capabilities, CogInfoCom has also motivated and given rise to a number of new research areas and initiatives, including CogInfoCom channels, speechability, socio-cognitive ICT, EtoCom, CogInfoCom-aided industrial engineering and mathability, as described here. Details on these fields can be found in contributions to the various CogInfoCom fora, as well as in an upcoming book (to appear in late 2015) [3].


Examples

Examples in this section are divided into four categories based on different combinations of modes and types of communication. The examples are also described in [4]. Further details on research areas and applications relevant to CogInfoCom can be found here.


Intra-cognitive sensor-sharing communication

An example of intra-cognitive sensor-sharing communication is when two humans communicate through Skype or some other telecommunication system, and a large variety of information types (e.g. metalinguistic information and background noises through sound, gesture-based metacommunication through video, etc.) are communicated to both ends of the line. In more futuristic applications, information from other sensory modalities (e.g. smells through the use of electronic noses and scent generators, tastes using equipment not yet available today) may also be communicated. Because the communicating actors are both human, the communication mode is intra-cognitive, and because the communicated information is shared using the same cognitive subsystems (i.e., the same sensory modalities) on both ends of the line, the type of communication is sensor-sharing. The communication of such information is significant not only because the users can feel physically closer to each other, but also because the sensory information obtained at each end of the line (i.e., information which describes not the actor, but the environment of the actor) is shared with the other end (in such cases, the communication is intra-cognitive, despite the fact that the transferred information describes the environment of the actor, because the environment is treated as a factor which has an implicit, but direct effect on the actor).


Intra-cognitive sensor-bridging communication

An example of intra-cognitive sensor-bridging communication is when two humans communicate through Skype or some other telecommunication system, and each actor’s pulse is transferred to the other actor using a visual representation consisting of a blinking red dot, or the breath rate of each actor is transferred to the other actor using a visual representation which consists of a discoloration of the screen. The frequency of the discoloration could symbolize the rate of breathing, and the extent of discoloration might symbolize the amount of air inhaled each time (similar ideas are investigated in, e.g., [17][18]). Because the communicating actors are both human, the communication mode is intra-cognitive. Because the sensory modality used to perceive the information (the visual system) is different from the modality normally used to perceive the information (it is questionable whether such a modality even exists, because we don’t usually feel the pulse or breath rate of other people during normal conversation), we say that the communication is sensor-bridging. The communication of such parameters is significant in that they help further describe the psychological state of the actors. Because such parameters are directly imperceptible even in face-to-face communication, the only possibility is to convey them through sensor bridging. In general, the transferred information is considered cognitive because the psychological state of the actors does not depend on this information in a definitive way, but when interpreted by a cognitive system such as a human actor, the information and its context together can help create a deeper understanding of the psychological state of the remote user.
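
As a purely illustrative sketch of this kind of sensor bridging, the snippet below maps a measured breath rate and inhaled volume onto the pulse frequency and intensity of a screen discoloration, in the spirit of the example above. The function name, parameter ranges and scaling are assumptions made for the sake of the example, not a description of the systems cited in [17][18].

  def breath_to_discoloration(breaths_per_min: float, inhaled_volume_l: float):
      """Map breathing parameters to a visual 'discoloration' cue.

      Hypothetical mapping:
        - breaths_per_min drives how often the screen tint pulses (Hz)
        - inhaled_volume_l drives how strong the tint becomes (0..1 opacity)
      """
      # One discoloration pulse per breath
      pulse_hz = breaths_per_min / 60.0

      # Scale an assumed 0.3-3.0 litre inhaled-volume range to an opacity of 0..1
      opacity = min(max((inhaled_volume_l - 0.3) / (3.0 - 0.3), 0.0), 1.0)
      return pulse_hz, opacity

  # Resting breathing yields slow, faint pulses; deep breathing a stronger tint
  print(breath_to_discoloration(12, 0.5))
  print(breath_to_discoloration(20, 2.0))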


Inter-cognitive sensor-sharing communication

An example of inter-cognitive sensor-sharing communication might include the transfer of the operating sound of a robot actor, as well as a variety of background scents (using electronic noses and scent generators) to a human actor controlling the robot from a remote teleoperation room. The operating sound of a robot actor can help the teleoperator gain a good sense of the amount of load the robot is dealing with, how much resistance it is encountering during its operation, etc. Further, the ability to perceive smells from the robot’s surroundings can augment the teleoperator’s perception of possible hazards in the robot’s environment. A further example of inter-cognitive sensor sharing would be the transfer of direct force feedback through e.g. a joystick. The communication in these examples is inter-cognitive because the robot’s cognitive system is significantly different from the human teleoperator’s cognitive system. Because the transferred information is conveyed directly to the same sensory modality, the communication is also sensor-sharing. Similar to the case of intra-cognitive sensor-sharing, the transfer of such information is significant because it helps further describe the environment in which the remote cognitive system is operating, which has an implicit effect on the remote cognitive system.


Inter-cognitive sensor-bridging communication

As information systems, artificial cognitive systems and the virtual manifestations of these systems (which are gaining wide acceptance in today’s engineering systems) become increasingly sophisticated, the way in which they operate and organize complex information becomes, by its nature, essentially inaccessible to the human perceptual system and the information representations it uses. For this reason, inter-cognitive sensor bridging is perhaps the most complex area of CogInfoCom.

A rudimentary example of inter-cognitive sensor-bridging communication that is already in wide use today is the collision-detection system available in many cars, which plays a frequency-modulated signal whose frequency depends on the distance between the car and the (otherwise completely visible) car behind it. In this case, auditory signals are used to convey spatial (visual) information. A further example could be the use of the vibrotactile system to provide force feedback through axial vibrations (this is a commonly adopted approach in various applications, from remote vehicle guidance to telesurgery). Force feedback through axial vibration is also widespread in gaming because, with practice, players easily adapt to the signals and will readily interpret them as if they corresponded to a real collision with an object or someone else’s body. It is important to note, however, that the use of vibrations is no longer limited to the transfer of information on collisions or other simple events, but can also be used to communicate more complex information, such as warning signals to alert the user to events whose occurrence is deduced from a combination of events with a more complex structure (e.g., vibrations of smartphones to alert the user of suspicious account activity, etc.). Such event-detection systems can be powerful when combined with the concepts of intelligent environments or smart homes.
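
The distance-to-sound mapping used by such warning systems can be pictured roughly as follows; the thresholds and beep rates below are hypothetical values chosen for illustration, not taken from any particular product.

  def collision_warning_rate(distance_m: float,
                             min_dist: float = 0.3,
                             max_dist: float = 2.0) -> float:
      """Return a beep repetition rate (Hz) for a rear-distance warning.

      Hypothetical mapping: the closer the obstacle, the faster the beeps,
      saturating to a continuous tone below min_dist and silence above max_dist.
      """
      if distance_m >= max_dist:
          return 0.0            # far enough: no warning
      if distance_m <= min_dist:
          return float("inf")   # effectively a continuous tone
      # Linear interpolation between 1 Hz (far) and 10 Hz (near)
      closeness = (max_dist - distance_m) / (max_dist - min_dist)
      return 1.0 + 9.0 * closeness

  for d in (2.5, 1.5, 0.8, 0.4):
      print(d, collision_warning_rate(d))

The point of the sketch is simply that a continuous spatial quantity (distance) is re-encoded as a parameter of an auditory signal, which is the defining feature of sensor bridging in this example.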

Finally, more complex examples of sensor bridging in inter-cognitive communication might include the use of electrotactile arrays placed on the tongue to convey visual information received from a camera placed on the forehead (as in some early research on sensory substitution), or the transfer of a robot actor’s tactile percepts (as detected by e.g. a laser profilometer) using abstract sounds on the other end of the communication line. Relatively short audio signals (i.e., 2–3 seconds long) have also been used to convey abstract tactile dimensions such as the softness, roughness, stickiness and temperature of surfaces in virtual environments. [19]
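
One way to picture how such abstract tactile dimensions might be bridged into short audio signals is sketched below; the dimension-to-parameter assignments are assumptions made for illustration only and are not the mapping used in [19].

  def tactile_to_audio(softness: float, roughness: float,
                       stickiness: float, temperature: float) -> dict:
      """Map normalized tactile dimensions (0..1) to parameters of a short
      (2-3 s) abstract audio signal. All assignments below are hypothetical."""
      return {
          "duration_s": 2.5,
          "pitch_hz": 800 - 600 * softness,      # softer surface -> lower pitch
          "am_depth": roughness,                 # rougher surface -> deeper amplitude modulation
          "attack_s": 0.05 + 0.45 * stickiness,  # stickier surface -> slower onset
          "brightness": temperature,             # warmer surface -> brighter timbre
      }

  print(tactile_to_audio(softness=0.8, roughness=0.2, stickiness=0.1, temperature=0.6))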

The type of information conveyed through sensor bridging, the extent to which this information is abstract, and the sensory modality to which it is conveyed are all open research questions. As researchers obtain closer estimates of the number of dimensions each sensory modality is sensitive to, and of the resolution and speed of each modality’s information processing capabilities, research and development in sensory substitution will surely provide tremendous improvements to today’s engineering systems.


Complex example

Figure: scenario for the complex example, in which two remote telesurgeons communicate with each other and with the telesurgical devices they are using to operate on a patient.

Let us consider a scenario where a telesurgeon in location A is communicating with a telesurgical robot in remote location B, and with another telesurgeon in remote location C. At the same time, let us imagine that the other telesurgeon (in location C) is communicating with a different telesurgical robot, also in remote location B (in much the same way as a surgical assistant would perform a different task on the same patient), and with the first telesurgeon in location A. In this case, both teleoperators are involved in one channel of inter-cognitive and one channel of intra-cognitive communication. Within these two modes, examples of sensor sharing and sensor bridging might occur at the same time. Each telesurgeon may see a camera view of the robot they are controlling, feel the limits of their motions through direct force feedback, and hear the soft but whining sound of the operating robot through direct sound transfer. These are all examples of sensor-sharing inter-cognitive communication. The transmission of the operated patient’s blood pressure and heart rate is also an example of sensor-sharing inter-cognitive communication (it is inter-cognitive because the transmission of information is effected through the communication links with the robot, and it is sensor-sharing because the data are presented in the same graphical form in which blood pressure and heart rhythm information are normally displayed). At the same time, information from various sensors on the telesurgical robot might be transmitted to a different sensory modality of the teleoperator (e.g., information from moisture sensors using pressure applied to the arm, etc.), which would serve to augment the telesurgeon’s cognitive awareness of the remote environment, and can be considered as sensor-bridging communication resulting in an augmented form of telepresence. Through the intra-cognitive mode of communication, the two teleoperators can obtain information on each other’s psychological state and environment. Here we can also imagine both sensor-sharing and sensor-bridging types of communication, all of which can directly or indirectly help draw each telesurgeon’s attention to possible problems or abnormalities the other telesurgeon is experiencing.
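
As a rough bookkeeping aid, the illustrative listing below tags each channel available to the telesurgeon at location A with its mode and type, following the taxonomy introduced earlier on this page; the descriptions and labels simply restate the scenario above.

  # Each entry: (information conveyed, mode, type) for the telesurgeon at location A.
  channels_at_A = [
      ("camera view of the controlled robot",            "inter-cognitive", "sensor-sharing"),
      ("force feedback on motion limits",                "inter-cognitive", "sensor-sharing"),
      ("operating sound of the robot",                   "inter-cognitive", "sensor-sharing"),
      ("patient blood pressure / heart rate display",    "inter-cognitive", "sensor-sharing"),
      ("moisture sensors mapped to pressure on the arm", "inter-cognitive", "sensor-bridging"),
      ("cues about the other surgeon's state",           "intra-cognitive", "sensor-sharing or sensor-bridging"),
  ]

  for info, mode, comm_type in channels_at_A:
      print(f"{info}: {mode}, {comm_type}")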


References

  1. P. Baranyi and A. Csapo, "Definition and Synergies of Cognitive Infocommunications", Acta Polytechnica Hungarica, vol. 9, no. 1, pp. 67-83, 2012
  2. G. Sallai, "The Cradle of Cognitive Infocommunications", Acta Polytechnica Hungarica, 9(1), pp. 171-181, 2012
  3. http://www.springer.com/gp/book/9783319196077
  4. P. Baranyi and A. Csapo, “Cognitive Infocommunications: CogInfoCom”, 11th IEEE International Symposium on Computational Intelligence and Informatics, Budapest, Hungary, 2010.
  5. T. Nannicelli, P. Taberham (Eds.) Cognitive Media Theory. Routledge New York, London, 2014
  6. B. Hokanson, S. Hooper, "Computers as Cognitive Media: Examining the Potential of Computers in Education", Computers in Human Behavior 16(5): 537-552, 2000
  7. M. M. Recker, A. Ram, T. Shikano, G. Li, and J. Stasko, “Cognitive media types for multimedia information access,” Journal of Educational Multimedia and Hypermedia, vol. 4, no. 2–3, pp. 183–210, 1995.
  8. R. Kozma, “Learning with media,” Review of Educational Research, vol. 61, no. 2, pp. 179–212, 1991.
  9. Y. Wang and W. Kinsner, "Recent Advances in Cognitive Informatics", IEEE Transactions on Systems, Man and Cybernetics, 32(2), pp. 121-123, 2006
  10. D. Vernon and G. Metta and G. Sandini, "A Survey of Artificial Cognitive Systems", IEEE Transactions on Evolutionary Computation, 11(2), pp. 151-179, 2007
  11. J. Roschelle, “Designing for cognitive communication: epistemic fidelity or mediating collaborative inquiry?” in Computers, communication and mental models. Taylor & Francis, 1996, pp. 15–27.
  12. D. Hewes, The Cognitive Bases of Interpersonal Communication. Routledge, 1995.
  13. J. Mitola, G. Maguire, "Making Software Radios more Personal", IEEE Personal Communications, 13-18, 1999
  14. G. Sallai, "Defining Infocommunications and Related Terms", Acta Polytechnica Hungarica, 9(6), pp. 5-15, 2012
  15. P. Baranyi, B. Solvang, H. Hashimoto, and P. Korondi, “3d internet for cognitive info–communication” in 10th International Symposium of Hungarian Researchers on Computational Intelligence and Informatics (CINTI ’09, Budapest), 2009, pp. 229–243.
  16. G. Soros, B. Resko, B. Solvang and P. Baranyi, "A Cognitive Robot Supervision System", SAMI 2009, pp. 51-55
  17. C. Sommerer and L. Mignonneau, “Mobile feelings - wireless communication of heartbeat and breath for mobile art,” in 14th International Conference on Artificial Reality and Teleexistence, Seoul, South Korea (ICAT ’04), 2004, pp. 346–349.
  18. L. Mignonneau and C. Sommerer, “Designing emotional, metaphoric, natural and intuitive interfaces for interactive art, edutainment and mobile communications,” Computers & Graphics, vol. 29, no. 6, pp. 837 – 851, 2005.
  19. A. Csapo and P. Baranyi, "A Conceptual Framework for the Design of Audio Based Cognitive Infocommunication Channels", in Recent Advances in Intelligent Engineering Systems, ser. Studies in Computational Intelligence, Springer-Verlag, 368, pp. 261-281, 2012

License

The text of this website is available for modification and reuse under the terms of the Creative Commons Attribution-Sharealike 3.0 Unported License and the GNU Free Documentation License (unversioned, with no invariant sections, front-cover texts, or back-cover texts).
