Ernesto William De Luca






  • Project: Children and their World ( 2015 - Current )

    A group of researchers from the Georg Eckert Institute (history), the University of Hildesheim Foundation (information sciences) and the German Institute for International Educational Research, in cooperation with the TU Darmstadt (computational linguistics, software development), constituted itself in May 2014. The researchers are to be joined by project partners from the library of the University of Braunschweig, the University of Zürich, the Göttingen Centre for Digital Humanities and the Bavarian State Library. The aim of the project is to forge new, transdisciplinary paths in research on children’s images of their world in the period under investigation, routes to knowledge which may transcend the limits of the established qualitative methods currently in use and could therefore have the potential to shift the boundaries of historical research in this area. Funded through the Leibniz Competition, the project seeks to harness tools previously employed successfully in other areas of the digital humanities, such as topic detection and opinion mining, for the use of historians working on the nineteenth and twentieth centuries. To this end, we will aim to reconstruct intertextual connections, identify and highlight thematic clusters, and cast light on semantic fields, for the purpose of both generating quantitative findings and interpreting them as we place them in their historical context. Further, we expect the project to generate hypotheses which historians will be able to examine and refine through hermeneutic exploration of selected works. Specifically, we will analyse the frequencies with which particular words occur, the use of grammatical forms, the appearance of semantic fields and the ways in which historical topics and figures are positioned in these texts. We are committed to enabling our project to bear fruit in the long term, and so will be aiming, along with presenting the approaches we develop to the research community, to make our findings available in digital form, on a permanent basis wherever possible.
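
    A minimal sketch of the kind of quantitative step described above, counting how often selected words occur per decade of a corpus; the corpus and word list here are invented for illustration:

        from collections import Counter

        # Toy corpus keyed by decade; a real run would use the digitised textbooks.
        corpus = {
            1880: "the emperor and the fatherland and the school",
            1920: "the republic and the school and the child",
        }

        # Hypothetical watchlist of historically interesting terms.
        watchlist = {"emperor", "fatherland", "republic", "child"}
        for decade, text in corpus.items():
            counts = Counter(w for w in text.split() if w in watchlist)
            print(decade, dict(counts))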
  • Project: CLARIN-D Modern History ( 2015 - Current )

    Discipline-Specific Working Groups are central constituents of the CLARIN network (Common Language Resources and Technology Infrastructure), which is working across Europe to develop research infrastructure for language resources for the humanities and social sciences; the German branch of the project is funded by the German Federal Ministry of Education and Research. The CLARIN Working Group on Modern History, based at the Georg Eckert Institute, was launched in September 2014. The Discipline-Specific Working Groups represent key points of intersection and interaction between CLARIN-D and research communities in specific disciplines; they act as advisory and support services for the incorporation and curation of priority research data and tools. Since February 2015, the Georg Eckert Institute has been involved in a curation project pertaining to its digital textbook holdings. The historians who make up the Working Group on Modern History are dedicated to exploring the questions of which resources should be made available in digital form for research in their discipline, which standards should be employed in this undertaking, and which digital methods and tools might benefit historians’ work with these resources and open up new research pathways. Headed by Prof. Simone Lässig and Ernesto W. De Luca, the Working Group is further investigating the extent to which the emerging field of digital humanities is giving rise to new research questions and methodological approaches. The group currently consists of members of the Göttingen Centre for Digital Humanities, Humboldt University Berlin, the Institute for the History of the German Jews, the Herzog August Library in Wolfenbüttel, the Max Weber Foundation, the Institute of Contemporary History in Munich, the German Historical Institutes in Rome and London, the German Institute for International Educational Research and the Georg Eckert Institute. Curation projects devised and supervised by the Working Group will engage with existing services provided by the CLARIN-D service centres, evaluate them and drive their ongoing development in line with historians’ specific academic needs. In this way, the Working Group will help ensure that the research questions and interests of historical research, specifically historical research into education, are adequately reflected in European research infrastructures, thereby enabling these infrastructures to continue improving.
  • Project: WorldViews. The World in Textbooks ( 2015 - Current )

    Since February 2015, the departments Europe and DIFI have been developing a research infrastructure that will integrate data storage for digital projects at the Georg Eckert Institute. This will dramatically improve interoperability and re-usability, making it easier to evaluate research results using tools and methods from the digital humanities. Moving beyond the dimension of content, WorldViews is of fundamental importance to the further development of our institute’s infrastructure. Much of the GEI’s research infrastructure and many transfer projects are currently being standardised and integrated. WorldViews works towards high standards and aims to render the data from GEI projects usable in the long term – long after a project’s duration. By drawing on structures already in place, such as those developed in the course of large-scale digital humanities projects, we are searching for ways to overcome isolated technological solutions and to secure interoperability. Semantic methods play just as important a part here as long-term availability and the conversion of metadata. We are working with standardised data with a view to using the data within the semantic web.
  • Project: Edumeres ( 2015 - Current )

    Edumeres provides a central access point to the Georg Eckert Institute’s research-oriented information and communication infrastructure. Straddling national and disciplinary boundaries, the portal brings together a multifaceted range of tools and materials, along with access to academic works and research, and makes them available to the widely dispersed community of textbook and educational media researchers and practitioners.
  • Project: International TextbookCat ( 2015 - Current )

    The TextbookCat research instrument provides a welcome extension to the library OPAC system: a search tool that dramatically improves search possibilities within the textbook collection. The difficulties encountered when conducting textbook research using conventional search methods were a key consideration in the design process, as short textbook titles such as “Terra” produce little in the way of meaningful search results. This desideratum, frequently broached by researchers, was taken as the basis for a search tool tailor-made for the specific requirements of textbook searches. The TextbookCat employs an internal classification system to categorise textbooks according to applicable country, education level and subject. Additional categories of federal state and school type are provided for German textbooks, and international textbooks can be filtered according to language. Search results containing material that is not available online include details of whether it is available for loan. The International TextbookCat pilot project develops the idea of the TextbookCat further and augments the textbook collection with the inventories of international partners. The project is currently focussing on combining the textbook databases of three institutions – the Georg Eckert Institute, the University of Turin and the National Distance Education University in Spain – in order to create a joint reference tool. Workflows and system architecture are being developed that will, in the long term, enable further institutions to participate with relatively little effort on their part.
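
    An illustrative sketch (record fields invented for illustration) of the faceted filtering that such a classification enables:

        # Toy catalogue records; a real system holds the merged partner databases.
        books = [
            {"title": "Terra", "country": "DE", "level": "secondary", "subject": "geography", "state": "NI"},
            {"title": "Terra", "country": "IT", "level": "secondary", "subject": "geography", "state": None},
        ]

        def search(**facets):
            # Keep only records matching every requested facet value.
            return [b for b in books if all(b.get(k) == v for k, v in facets.items())]

        print(search(country="DE", subject="geography"))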
  • Project: GEI-Digital ( 2015 - Current )

    In 2009 the GEI began to digitise its historical textbook collection. This long-term project aims to build and develop a virtual library of German historical textbooks on selected subjects. As far as possible, all teaching materials available in German libraries dating from the 17th century through to the end of the National Socialist era, which are often difficult to access otherwise, are being virtually consolidated, both in form and content. These sources are thus rendered available, barrier-free, to a wide circle of researchers.
  • Project: Semantic Clustering ( 2013 - Current )



    Since 2013 I have been cooperating with the Landau Media company (http://www.landaumedia.de), one of Germany's leading media monitoring and analysis companies. In times of social media, the number of sources and the volume of information are constantly increasing, so reviewing the relevant information is time-consuming. In the project "Semantic Clustering for Trend Detection and Sentiment Analysis" we are developing a framework to recognize new topics and cluster similar news articles for social media and traditional online media. News articles are processed in order to disambiguate the main concepts and entities they contain, simplifying the screening of the media response to each topic. The system is trained to automatically recognize whether a new article corresponds to a known event or constitutes a novel event (Topic Detection and Tracking). The articles are grouped into events using a hierarchical agglomerative clustering approach. We also implement a classification approach to categorize news articles into given classes. All results are then displayed in an online portal.
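
    A minimal sketch (my own reconstruction, not the project's code) of grouping articles into events with hierarchical agglomerative clustering; the TF-IDF representation and the distance cut-off are illustrative assumptions:

        from scipy.cluster.hierarchy import linkage, fcluster
        from sklearn.feature_extraction.text import TfidfVectorizer

        articles = [
            "Flood warnings issued along the Elbe after heavy rain",
            "Heavy rain causes Elbe flooding, warnings issued",
            "Central bank raises interest rates by a quarter point",
        ]

        # Represent each article as a TF-IDF vector.
        vectors = TfidfVectorizer().fit_transform(articles).toarray()

        # Average-linkage clustering over cosine distances; cutting the
        # dendrogram at a fixed distance yields one cluster per event.
        tree = linkage(vectors, method="average", metric="cosine")
        event_ids = fcluster(tree, t=0.8, criterion="distance")

        for article, event in zip(articles, event_ids):
            print(event, article)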
  • Project: SemRes ( 2012 - Current )



    In conservation, documentation is an integral part of restorers’ work: it describes the objects they have restored. It is indispensable for understanding the phases of conservation and for decision-making about conservation procedures. The information documentations contain about the procedures conducted, as well as the methods and materials used, is an immeasurable store of knowledge for the discipline and builds the basis of the knowledge used for the conservation of a given object. Documentation is very time-consuming: conservators have to categorize photos, give detailed descriptions of the procedures and justify decisions about materials. Every conservator delivers his or her own documentation to the respective monument administration, which archives it on its shelves, with no possibility of digital access. As a result, access to records of procedures already conducted, as well as to the knowledge about the methods and materials used, is very difficult and often almost impossible. At present it is not possible to link documentations that describe similar objects, methods or materials; without such links, these commonalities remain unrecognizable. In the SemRes project (Semantic Documentation for Conservators) I have been cooperating since 2012 with the River Byte company (http://www.riverbyte.de) to develop a system that helps conservators structure their documentations and access them semantically.
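
    A minimal sketch (the record schema is invented for illustration) of how documentations could be linked once objects, methods and materials are recorded as structured facts rather than free text:

        # Toy documentation records with structured fields.
        records = [
            {"doc": "report-17", "object": "altar piece", "method": "consolidation", "material": "sturgeon glue"},
            {"doc": "report-42", "object": "panel painting", "method": "consolidation", "material": "sturgeon glue"},
            {"doc": "report-55", "object": "stone figure", "method": "cleaning", "material": "laser"},
        ]

        def related(doc_id: str, key: str) -> list:
            # Find all other documentations sharing this record's value for `key`.
            value = next(r[key] for r in records if r["doc"] == doc_id)
            return [r["doc"] for r in records if r[key] == value and r["doc"] != doc_id]

        # All documentations that used the same material as report-17:
        print(related("report-17", "material"))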
  • Project: SPIGA ( 2009-2012 )



    The goal of the project SPIGA (Language-independent personalized information provision with global orientation) is the development of an information service that automatically creates document collections for semantically described topics. Such a service can be used in products such as personalized newsletters or welcome pages, or in press reviews. The idea of this project is to exploit both the semantics and the multilinguality of news articles. Working within the news domain has the advantage that the same news event is often covered in different languages, so that there are multiple text variants of the same information. The core approach focuses on the extraction of knowledge – or rather semantic concepts – from news in order to link the news documents. Each news event is thus represented by its concepts, and can hence be managed in a language-independent fashion. The research focus lies on the development and evaluation of methods for the disambiguation of named entities. The representation of news documents as a set of concepts and concept relations then makes it possible to search and group documents on the basis of these concepts, e.g. to create a press review.
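
    A minimal sketch of the core idea, under assumptions of mine rather than SPIGA's actual pipeline: each article is reduced to a set of language-independent concept identifiers (here, hypothetical Wikidata-style IDs produced by an entity-disambiguation step), so articles in different languages can be linked by concept overlap alone:

        def jaccard(a: set, b: set) -> float:
            return len(a & b) / len(a | b) if a | b else 0.0

        # Illustrative concept IDs per article, keyed by language and article number.
        articles = {
            "de:001": {"Q64", "Q567", "Q183"},   # German article: Berlin, Merkel, Germany
            "en:042": {"Q64", "Q567", "Q30"},    # English article: Berlin, Merkel, USA
            "it:007": {"Q220", "Q38"},           # Italian article: Rome, Italy
        }

        # Link articles that share enough concepts, regardless of language.
        ids = list(articles)
        for i, a in enumerate(ids):
            for b in ids[i + 1:]:
                if jaccard(articles[a], articles[b]) >= 0.5:
                    print(f"{a} <-> {b} cover the same event")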
  • Project: SERUM ( 2009-2012 )



    SERUM (Semantic Recommendations based on Large Unstructured Datasets) establishes the basis for a semantic recommender system that calculates high-quality recommendations based on a semantic analysis of user behavior and news articles. The aim of the project is to develop a recommender system that computes recommendations independently of a specific use case or domain. The recommendations are personalized and adapted to the specific needs of a user based on personal interests and preferences. Within the SERUM research project, the goal is to recommend news articles based on the previous reading behavior of a user. This reading behavior is analyzed to create a personalized news digest for each user. The recommendation system is connected to a semantic knowledge base, which is modeled and managed as an ontology. The semantic knowledge is linked with information from current news articles. Based on this semantic network, new algorithms have been developed that analyze the semantic information and the user behavior to compute high-quality recommendations.
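
    A minimal sketch, not SERUM's implementation: recommendations are derived from a tiny semantic network by spreading activation outward from the concepts of articles the user has read; the graph, decay factor and article-concept assignments are illustrative:

        from collections import defaultdict

        # concept -> related concepts in the knowledge base (toy ontology)
        graph = {
            "football": ["bundesliga", "champions_league"],
            "bundesliga": ["bayern_munich"],
            "champions_league": ["bayern_munich", "real_madrid"],
        }

        def spread(seed_concepts, decay=0.5, steps=2):
            activation = defaultdict(float)
            frontier = {c: 1.0 for c in seed_concepts}
            for _ in range(steps):
                next_frontier = defaultdict(float)
                for concept, energy in frontier.items():
                    activation[concept] += energy
                    for neighbour in graph.get(concept, []):
                        next_frontier[neighbour] += energy * decay
                frontier = next_frontier
            for concept, energy in frontier.items():  # flush the last frontier
                activation[concept] += energy
            return activation

        # Concepts extracted from the user's reading history (assumed input).
        scores = spread({"football"})

        # Rank unread articles by the summed activation of their concepts.
        article_concepts = {"a1": {"bayern_munich"}, "a2": {"real_madrid"}, "a3": {"opera"}}
        ranked = sorted(article_concepts, key=lambda a: -sum(scores[c] for c in article_concepts[a]))
        print(ranked)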
  • Project: KMulE ( 2009-2012 )



    Context-aware recommender systems are becoming a popular topic, yet many aspects remain unexplored. In the project KMulE (Context-based Multimedia Recommender System), we study context identification and the concepts involved in hybrid and context-aware systems. The goal of the project is to implement a conceptual architecture for a context-aware recommender system for movies and TV shows. The system consists of a number of modules for context identification and recommendation. Key contextual features are identified and used to create several sets of recommendations, based on the predicted context. The resulting prototype system will be evaluated and incorporated into the recommendation engine of the movie and TV recommendation website Moviepilot.
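
    An illustrative sketch (not the KMulE architecture itself) of contextual pre-filtering, one common way to use a predicted context: the context narrows the candidate set before an ordinary recommender ranks it; the rules and fields are invented:

        from dataclasses import dataclass

        @dataclass
        class Movie:
            title: str
            genres: set
            runtime_min: int

        catalog = [
            Movie("Epic Quest", {"fantasy"}, 160),
            Movie("Quick Laughs", {"comedy"}, 85),
            Movie("Night Chills", {"horror"}, 95),
        ]

        def recommend(context: dict, items: list) -> list:
            # Pre-filter: weekday evenings favour shorter films (assumed rule).
            if context.get("day_type") == "weekday":
                items = [m for m in items if m.runtime_min <= 100]
            # A real system would now hand the survivors to a rating predictor;
            # here we simply prefer the user's favourite genre from the context.
            fav = context.get("favourite_genre")
            return sorted(items, key=lambda m: fav not in m.genres)

        picks = recommend({"day_type": "weekday", "favourite_genre": "comedy"}, catalog)
        print([m.title for m in picks])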
  • Project: PIA ( 2009-2012 )



    The goal of the Personal Information Assistant (PIA) project is to provide a comprehensive agent-based solution for the personalized and device-independent supply of information. Users receive information that is relevant to their personal needs and interests, including daily news, background knowledge on work issues, or information on leisure plans and activities. Besides a typical web search engine interface, the PIA system allows users to define and save searches, which are then continually monitored by search agents for any new developments. The architecture of the PIA system is designed to allow information sources to be flexibly integrated into the system. Information is analyzed and filtered using advanced filtering methods, e.g. content-based or collaborative filtering techniques. The use of multiple filtering techniques, guided by user feedback integrated through a learning and user-modelling component, ensures highly accurate search results.
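
    A minimal sketch, under assumptions of mine, of the kind of content-based filtering with feedback described above: a standing search scores incoming documents against a profile vector, and positive feedback shifts the profile towards liked documents (Rocchio-style):

        from collections import Counter
        import math

        def tf_vector(text: str) -> Counter:
            return Counter(text.lower().split())

        def cosine(a: Counter, b: Counter) -> float:
            dot = sum(a[t] * b[t] for t in a)
            na = math.sqrt(sum(v * v for v in a.values()))
            nb = math.sqrt(sum(v * v for v in b.values()))
            return dot / (na * nb) if na and nb else 0.0

        # The user's standing search, turned into a term-frequency profile.
        profile = tf_vector("semantic web information retrieval")

        # The search agent scores each newly arriving document against the profile.
        new_docs = ["semantic retrieval of web documents", "football results today"]
        for doc in new_docs:
            print(round(cosine(profile, tf_vector(doc)), 2), doc)

        # Positive feedback on a document shifts the profile towards it.
        alpha, beta = 1.0, 0.5
        liked = tf_vector("semantic retrieval of web documents")
        profile = Counter({t: alpha * profile[t] + beta * liked[t] for t in profile | liked})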
  • Project: SPREE ( 2009-2012 )



    The main focus of the "SPREE - A Community-Based Information Exchange Network" project is the implementation of an online portal for efficient knowledge transfer between its users. The platform must therefore be capable of identifying, in real time, the users (experts) best qualified to answer a given query; the quality of the algorithm matching queries to experts is of central importance. The portal also provides means such as chat that allow users and experts to communicate with each other directly, in order to maximize the quality of the results.
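
    An illustrative sketch (assumptions mine, not SPREE's matching algorithm): experts are profiled by terms from their past contributions, and an incoming question is routed to the experts whose profiles overlap it most:

        from collections import Counter

        # Toy expert profiles built from terms in past answers.
        expert_profiles = {
            "alice": Counter("jena rdf sparql ontology".split()),
            "bob": Counter("java swing gui widgets".split()),
        }

        def route(question: str, k: int = 1) -> list:
            q = Counter(question.lower().split())
            # Score each expert by shared term mass with the question.
            overlap = lambda p: sum(min(q[t], p[t]) for t in q)
            return sorted(expert_profiles, key=lambda e: -overlap(expert_profiles[e]))[:k]

        print(route("how do I write a sparql query against an rdf ontology"))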
  • Project: RDF-OWL EuroWordNet Representation ( 2007 - 2008 )



    After a deep analysis of the different problems related to lexical resources, and in particular to WordNet and its variations, I decided to convert EuroWordNet, the much richer multilingual WordNet variant, into an RDF/OWL representation. The decision to convert EuroWordNet was based on the need to extend it with other resources, because not all meanings are covered. These resources are domain-specific and could enhance the coverage of the meanings used for the semantic classification of documents (as I discussed in my doctoral thesis). The novelty of this approach is to extend the monolingual WordNet RDF/OWL representation for multilingual purposes, but also for possible domain-specific extensions, because most domain-specific ontologies are written in OWL. Together with Aldo Gangemi (Laboratory of Applied Ontologies in Rome, Italy), I am working on the possibility of integrating my multilingual RDF/OWL EuroWordNet representation into the standardized W3C RDF/OWL representation of WordNet. Together with Birte Lönneker-Rodman (International Computer Science Institute at the University of California, Berkeley), I am now exploring possibilities for converting the Hamburg Metaphor Database into my RDF/OWL format.
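
    A minimal rdflib sketch of what an RDF/OWL rendering of a EuroWordNet synset could look like; the namespace, class and property names are illustrative assumptions, not the schema actually used in the conversion:

        from rdflib import Graph, Namespace, Literal, RDF

        EWN = Namespace("http://example.org/eurowordnet#")  # hypothetical namespace
        g = Graph()

        synset = EWN["synset-dog-noun-1"]
        g.add((synset, RDF.type, EWN.NounSynset))
        g.add((synset, EWN.gloss, Literal("a domesticated canid", lang="en")))

        # Cross-lingual word forms are what make the multilingual extension possible.
        g.add((synset, EWN.wordForm, Literal("dog", lang="en")))
        g.add((synset, EWN.wordForm, Literal("Hund", lang="de")))
        g.add((synset, EWN.wordForm, Literal("cane", lang="it")))

        print(g.serialize(format="turtle"))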
  • Project: User Adaptive Interfaces ( 01.05.2003 - 30.04.2008 )

    This project was part of the European Network of Excellence on Intelligent Technologies for Smart Adaptive Systems (EUNITE); its aim was to review the state of the art in user-adaptive search interfaces and to initiate and intensify research collaboration between different research communities and industry. In recent years, several approaches have been developed that tackle specific problems of the retrieval process, e.g. feature extraction methods for multimedia data, problem-specific similarity measures and interactive user interfaces. These methods enable the design of efficient retrieval tools if the user is able to provide an appropriate query. In most cases, however, the user needs several steps to find the objects they are searching for. The main reasons for this are, on the one hand, users’ difficulty in specifying their interests in the form of a well-defined query (partially caused by inappropriate user interfaces) and, on the other hand, the problem of extracting relevant features from the (multimedia) objects. To improve today’s retrieval tools, and thus overall user satisfaction, it is necessary to develop methods that support the user in the search process, e.g. by providing additional information about the search results as well as the data collection itself, and by adapting the retrieval tool to the user’s needs and interests. I implemented different tools and applications in this context.
  • Project: ELDIT (Elektronisches Lernerwörterbuch Deutsch-Italienisch) ( 2003 )



    The ELDIT program (Electronic Learners' Dictionary German-Italian) is an online platform for beginner to intermediate language learners. It is targeted at German native speakers who wish to learn Italian and Italian native speakers who wish to learn German. Currently ELDIT includes a learners' dictionary, a text corpus (texts from the so-called exams in bilingualism), a grammar module and interactive quizzes.
  • Project: PEACH ( 2002 )



    PEACH is a project that addresses the questions individuals around the world ask themselves every day when visiting a cultural institution. The project's objective is to study and experiment with advanced technologies that can enhance cultural heritage appreciation by creating an interactive and personalized guide. The aim is to develop and use innovative technology to provide an educational and entertaining experience fit for each individual's background, needs and interests. PEACH is funded by the Autonomous Province of Trento under the Fondo Unico program. The two major partners are ITC-IRST and DFKI, with a number of other research institutions participating in the consortium.
  • Project: Renaissance ( 2001 )



    Virtual Renaissance Court is a research and development project funded by the European Commission within the framework of the IST programme. The project is directed by the Italian electronic publisher Giunti Multimedia and involves the German virtual-community specialists Blaxxun Interactive, the Swedish game publisher Iridon Interactive, and the Italian research institute Istituto Trentino di Cultura (ITC-IRST). The aim of the Renaissance project is to develop a new genre of edutainment applications featuring a high-quality graphical interface, networked co-operative environments, scientifically validated content and an innovative pedagogical approach. This enables the reproduction of historically fascinating environments while using the appealing interface of a game in order to teach history. The project foresees the development of a prototype reproducing life in a Renaissance court: a 3D multi-player Internet application in which users can play the roles of different courtiers at the same court.
  • Project: M-Piro (Multilingual Personalized Information Objects) ( 2001 )



    M-PIRO delivers multilingual information via the web for museum settings, offering personalized descriptions of museum exhibits; it is carried out with six international partners. The M-PIRO software creates descriptions of museum artefacts automatically, using information stored in a database. Children, adults and experts all receive slightly different, personalised descriptions. The system also remembers what viewers have looked at before, in order to avoid repetition and to draw comparisons with previously seen objects. Currently the technology can generate text in English, Italian or Greek.
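
    A toy sketch of database-to-text generation in the spirit of M-PIRO; the field names, user types and comparison heuristic are my own illustrations, not the project's generation engine:

        # Hypothetical exhibit database.
        exhibit_db = {
            "amphora-12": {"type": "amphora", "period": "classical", "material": "clay"},
            "kylix-03": {"type": "kylix", "period": "classical", "material": "clay"},
        }

        def describe(exhibit_id: str, user_type: str, seen: list) -> str:
            e = exhibit_db[exhibit_id]
            # Different user types receive differently phrased descriptions.
            if user_type == "child":
                text = f"This is a {e['type']}. It was made of {e['material']} a very long time ago."
            else:
                text = f"This {e['type']} dates from the {e['period']} period and is made of {e['material']}."
            # Draw a comparison with a previously seen object to avoid repetition.
            for prev in seen:
                if exhibit_db[prev]["period"] == e["period"]:
                    text += f" Like the {exhibit_db[prev]['type']} you saw earlier, it is {e['period']}."
                    break
            return text

        print(describe("kylix-03", "adult", seen=["amphora-12"]))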
  • Tool: CARSA - A Context Adaptive Retrieval System Architecture ( 2005 - 2008 )



    CARSA is a web-services-based architecture that supports the development of context-based information retrieval systems. The idea of such systems is to support users in the search process by, e.g., adapting the search results as well as the interface itself to user-specific needs and interests. CARSA's data and program structure was designed around web services and XML. This makes it easy to integrate and combine different methods for searching data locally or on the web (meta-searcher functionality), for modifying or categorizing search results, and for developing visualization methods on diverse clients (e.g. desktop PCs or mobile devices). In addition, CARSA contains a testing environment for evaluating single methods, e.g. classification or clustering methods, or combinations of them, with standard performance measures on benchmark data sets. In the research group in Magdeburg, I contributed to the design and implementation of the system architecture. Furthermore, I designed my own plug-ins for indexing, categorizing and visualizing information.
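
    A schematic sketch of the plug-in idea (interface and class names are invented for illustration; the real system is built on web services and XML rather than in-process Python classes): result-processing plug-ins share one interface and can be chained freely:

        from abc import ABC, abstractmethod

        class ResultPlugin(ABC):
            @abstractmethod
            def process(self, results: list) -> list: ...

        class Deduplicator(ResultPlugin):
            def process(self, results):
                seen, unique = set(), []
                for r in results:
                    if r["url"] not in seen:
                        seen.add(r["url"])
                        unique.append(r)
                return unique

        class Categorizer(ResultPlugin):
            def process(self, results):
                for r in results:
                    r["category"] = "news" if "news" in r["url"] else "other"
                return results

        # Plug-ins are composed into a pipeline and applied in order.
        pipeline = [Deduplicator(), Categorizer()]
        results = [{"url": "http://news.example/a"}, {"url": "http://news.example/a"}]
        for plugin in pipeline:
            results = plugin.process(results)
        print(results)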
  • Tool: RDF/OWL LexiRes ( 2006 - 2008 )



    I implemented RDF/OWL LexiRes as a visualization tool for handling the structures of ontological and lexical databases. The main idea of the tool is to let authors navigate ontology hierarchies and restructure them by manually or automatically merging, adding or deleting word senses. The tool is implemented in Java and uses the Jena Semantic Web Framework for querying and retrieving lexical data.
  • Tool: MultiLexExplorer ( 2006 - 2008 )



    MultiLexExplorer is a tool that combines knowledge-driven word sense disambiguation with a knowledge-based text retrieval approach in an interactive framework. Lexical resources are used to disambiguate documents (retrieved from the web or a local document collection) against the different meanings of a search term (retrieved from the lexical resources, in our case EuroWordNet), each meaning having an unambiguous description in different languages. The focus is especially on the integration of methods that adapt the system interface and the output to the current search context. This tool was joint work with a student, who then completed his diploma thesis on this task.
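
    A simplified, Lesk-style sketch of the disambiguation step (glosses and the overlap heuristic are illustrative; the tool itself draws senses from EuroWordNet): choose the sense whose description overlaps the context most:

        # Toy sense inventory with short glosses.
        senses = {
            "bank/finance": "an institution that accepts deposits and lends money",
            "bank/river": "sloping land beside a body of water",
        }

        def disambiguate(word_context: str) -> str:
            context = set(word_context.lower().split())
            def overlap(gloss: str) -> int:
                return len(context & set(gloss.split()))
            return max(senses, key=lambda s: overlap(senses[s]))

        print(disambiguate("the bank raised the interest rate on deposits"))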
  • Tool: Multilingual Sense Folder Interface ( 2003 - 2008 )



    The Multilingual Sense Folder Interface supports users in an interactive semantic-based search process. Semantic classes created from lexical resources are used for this purpose and combined with word sense disambiguation, multilingual text retrieval and document categorization techniques. The focus is on browsing and navigating information by way of the different word senses of a query, so that only the documents relevant to a chosen meaning are retained. Moreover, the user is supported by named-entity recognition, spell checking and stemming methods, all of which are integrated in the user interface. I implemented this interface as part of my doctoral thesis.
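
    An illustrative sketch of the Sense Folder idea (the sense descriptions and scoring are assumptions of mine): retrieved documents are filed into one folder per word sense of the query term:

        # Toy sense descriptions for the ambiguous query term "java".
        sense_glosses = {
            "java/island": "island of indonesia volcanoes coffee",
            "java/language": "programming language object oriented software",
        }

        docs = [
            "volcanoes on the island draw tourists",
            "object oriented software design in practice",
        ]

        def best_folder(doc: str) -> str:
            # Assign the document to the sense whose description it overlaps most.
            words = set(doc.split())
            return max(sense_glosses, key=lambda s: len(words & set(sense_glosses[s].split())))

        folders = {}
        for d in docs:
            folders.setdefault(best_folder(d), []).append(d)
        print(folders)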