However, while having internet access recognized as a human right might go some way towards addressing the digital divide, the theoretical case for such recognition has not been clearly established. Without a solid theoretical case, recognizing something as a human right misunderstands both the nature of that thing and the nature of human rights.
The former kind of misunderstanding may result in misdirected efforts at promoting the activity in question, and the latter in a debasement of human rights. This paper provides an account of human rights and argues that, on the basis of that account, internet access is not a human right, even though it is an important right in itself and one that enables the promotion of other human rights.

This study investigates how government support influences the performance of e-business companies.
Drawing on previous studies, funding support for technology development and marketing support, which currently account for the largest part of the support provided by the Korean government to the e-business sector, were selected as the independent variables. Performance indicators specific to e-business, such as human resources development, competitiveness enhancement, profitability, and growth in technology assets, were chosen as the dependent variables.
The data was collected through a survey of CEOs and executives of e-business companies that had received or were receiving government support. The analysis showed that funding support for technology development had a positive influence on competitiveness enhancement, profitability, and technology asset growth.
Marketing support, while it had a significant influence on competitiveness enhancement and technology asset growth, proved to have no measurable effect on profitability.

This paper is a study of the use of Twitter by automated agents, based on data sampled in July-September. Ideas are suggested for ways in which Twitter might defend against some common types of automated Twitter spam. The paper ends by outlining some general conclusions for designers of social information systems.

In addition, many studies have demonstrated that patients should have easy access to their own health information as well as to any information they need in order to make decisions about their own health care.
However, while there are a variety of tools for managing and sharing medical information, no integrated tool for health information management and sharing has been developed. Meeting this challenge requires a means to capture and interconnect information from various sources relevant to one patient, and to create a personal health space containing links to the health information that is related to the customer or in which the customer is interested. In this paper we describe our work on developing a personal health assistant, which integrates tools supporting personal health records, information therapy and health-oriented blogs.
Technically, the personal health assistant is based on knowledge management technologies, and it is easily extensible to capture additional e-health tools.

Electronic contracts (e-contracts) usually describe cross-organizational business processes, defining the electronic services to be provided and consumed as well as constraints on service execution such as Quality of Service (QoS).
Due to market dynamism, it is common that organizations involved in a cooperation need to adjust a pre-established e-contract. These changes should be allowed through renegotiation of contractual clauses after the e-contract has been signed and is being enacted. In this paper, feature modeling is used to represent electronic services (e-services), QoS attributes, and the control operations to be applied when QoS attribute levels are not met.
In addition, an execution environment is proposed to support contract establishment, business process execution, service monitoring and contract renegotiation.

It has become apparent to many security researchers that traditional security approaches are not sufficient to provide adequate security for today's pervasive electronic business environment. We and others argue that security is a socio-technical problem whose social components are not sufficiently addressed or understood. An interpretive approach is employed, and a general inductive coding process is used to analyse the collected data.
These aspects include perceptions and concerns as well as knowledge of and interaction with other stakeholders.

UPCITY tracks the stages of a local community problem-solving workflow on an interactive map, using a zoomable user interface as well as a timeline that adds a temporal dimension to the data present in the system. Usability-related features, as well as interoperability with popular social networks, are used to encourage citizen participation.
We provide an extensible platform by means of a flexible plug-in system, exemplified by an epidemic tracker.

E-government services require strong methods of identification and authentication in order to protect personal rights and to comply with corresponding laws. The requirements for the authentication process can be fulfilled by electronic signatures. Identification in e-government applications often relies on government-issued identifiers provided by electronic identity (eID) cards. An eID card with signature creation capabilities is typically called a Citizen Card.
The Information Cards technology, a recently introduced user-centric identity management framework, is gaining more and more importance in the field of eID. Given the expected importance of Information Cards in the future, it would be very reasonable to utilize them for e-government services. In this paper we present an approach to using Citizen Cards together with Information Cards for identification and authentication in e-government services.

In recent years, providers of e-business software have started tailoring their solutions to the needs of SMEs.
It is equipped with different search algorithms and offers an e-business competence calculator. The paper introduces the tool, focusing on the methods and concepts used to match the offers of e-business suppliers with SME needs.

Electronic commerce (eCommerce) is expected to play an increasingly important role in the 21st-century global market. The findings help to explore the eCommerce status, major contributing factors, and unique values of eCommerce for China.
The findings should also be important for studying eCommerce trends in emerging markets.

This paper shows how e-government can, or might even have to, be considered a public policy transformation. In the process of merging authorities into new organisations, public policies on e-government appeared as a key activity. The case study presented in the paper is the formation of the new Swedish Transport Agency, formed out of several formerly independent authorities. The Swedish case is set in a mature public administration with basic democratic core values.
The main contribution of the case study is to point out the importance of translating policies into organizational practices.

Social networks are known to stimulate the exchange and sharing of information among peers. Moreover, social networks can initiate cooperation. However, social networks are not widely used as work resources. This paper describes how collaboration can be coordinated in social networks. The proposed way to achieve this is based on the use of a set of activity lists of social network members. An activity list specifies all personal activities required to reach a collaborative output.
Based on the activity lists, a process model can be generated that controls and analyzes the coordination. Activities requiring collaboration are performed using the social network. The approach is illustrated with a use case.

As applications can, with the advancement of Web-based infrastructures, be composed of services from different providers across the Internet, it is not possible to foresee the legal requirements of every situation.
Therefore, new legal challenges arise for modular applications in an Internet of Services. However, since such service-based systems are becoming more and more self-describing through the use of sophisticated description schemas, we propose to apply standard legal methodology to this situation. By formalizing legal norms and the process of legal assessment used to obtain legal rights and obligations, we envision an autarchic system that can subsume service description facts under the terms of legal regulations in order to derive legal consequences.
This paper contributes the scientific concept of transferring legal methodology, as known in the offline world for decades, to a distributed and modular online business world that composes its applications dynamically from services by different providers.

Today, information overload and the lack of systems that provide employees with the right knowledge and skills are common challenges that large organisations face. This can lead to knowledge workers re-inventing the wheel due to problems in the retrieval of information from both internal and external sources.
This paper describes the benefits and constraints associated with the use of Web 2.0 tools in this context.
A number of landscape overview models are presented here that attempt to describe the effect of using Web 2.0. An organisation, active in the construction industry, is the focus of a case study of Web 2.0 use.

The effect of cultural values on IT adoption has attracted growing interest in recent years.
Researchers posit that cultural values can shed some additional light on the factors that determine IT user acceptance and use. In this research-in-progress work, the authors propose a model based on previous user acceptance theories to develop a research study examining the role that individual cultural values play in the adoption of those social network features that threaten users' privacy the most.
What the authors posit is that adoption of those features that are more critical from the point of view of users' privacy can be explained from the perspective of individuals' cultural values. In this preliminary work, the authors have developed the model and drawn a set of hypotheses.
In the following steps of the research, the authors will develop a survey to begin the quantitative research.

Topics such as the smart home and the smart car have recently become widespread.
The paper presents an innovative approach to context-oriented knowledge management in the smart space. The smart space consists of a set of devices that can interact with each other and exchange information and services. Knowledge management in such systems allows coordinating the activities of a large number of entities that communicate within the smart space.

To understand the adoption of collaborative systems, it is of great importance to know about the economic effects of collaboration itself.
In this context, information technology may help a firm to create sustainable competitive advantages over competitors. It is less clear whether collaboration is of any use in such an environment. According to the economics literature, the most important factors affecting the benefits of collaboration are market structure, the kind and degree of uncertainty faced by the firms, their risk preferences, and their collaboration propensity. The results depend on the way these factors are combined.
We present a microeconomic model and use techniques from game theory for the analysis.
The way the model is constructed will allow the derivation of closed-form solutions. Traditional learning models cannot represent individualities in a social system, or else they represent all of them in the same way. Results indicating whether collaboration in various areas makes sense will be obtained.
This makes it possible to judge the potential of available collaborative technology.
The basic model presented may be extended in various ways.

Besides occasional disastrous impacts, weather also affects daily life. The accuracy of weather and climatic information is, however, limited by spatial and temporal borders that need to be overridden. Also, weather information services cannot be fully customized, a problem arising from the spatial inaccuracy of weather forecasts and observations. Here, the role of social media, collective and civic intelligence, and crowdsourcing should be investigated.
This paper envisions a community of weather-interested users who provide usable observations of weather and environmental change, and presents a web-based interface for this community as a new method to collect weather and climatic information. User-generated weather observations can be processed based on principles of collective intelligence and co-creation, in order to improve, customize and personalize weather information.
It is also essential to provide tools that allow citizens to correctly identify the services they need. In this paper we discuss how e-gov service delivery can be improved by using human language technologies. We argue that these technologies can contribute to delivering services in more inclusive ways; providing human-centered and multilingual service and support; and including non-structured information scattered across different sources.
The unprecedented growth in service-based business processes over a short period of time has underscored the need for understanding the mechanisms and theorising the business models and business process management adopted across many organisations today. This is more evident within the Irish health sector. This research summarises a survey of the literature and argues that the inability of current Business Process Management BPM techniques to visualise and monitor web-enabled business processes prevents us from transforming information on network activity and infrastructures.
This paper presents an empirical study of machine learning-based sentiment analysis. Though polarity classification has been extensively studied at different document-structure levels, we systematically analyze four different English subjectivity resources for the task of sentiment polarity identification.
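The relationship between a subjectivity lexicon and the polarity features it yields can be sketched with a toy example. The lexicon and scoring below are purely illustrative assumptions, not the resources studied here: coverage is the fraction of tokens found in the lexicon, and the polarity label follows the sign of the summed scores.

```python
# Illustrative toy lexicon; real subjectivity resources are far larger.
LEXICON = {"good": 1, "great": 1, "excellent": 1,
           "bad": -1, "poor": -1, "terrible": -1}

def polarity_features(text):
    """Return (lexicon coverage, polarity label) for a whitespace-tokenized text."""
    tokens = text.lower().split()
    hits = [LEXICON[t] for t in tokens if t in LEXICON]
    coverage = len(hits) / len(tokens) if tokens else 0.0
    score = sum(hits)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return coverage, label

print(polarity_features("a great phone with excellent battery but poor camera"))
```

A larger lexicon raises coverage, but, as the results below note, higher coverage need not translate into higher classification accuracy.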
While the results show that the size of a dictionary clearly correlates with polarity-based feature coverage, this property does not correlate with classification accuracy. Based on the findings of the English-based experimental setup, a new German subjectivity resource is proposed for the task of German-based sentiment analysis.

GRSK is a recommender system (RS) based on a semantic description of the domain, which allows the system to work with any domain as long as the data of that domain can be defined through an ontology representation.
Through the GRSK configuration process, it is possible to select which techniques to use and to parameterize different aspects of the recommendation process, in order to adjust the GRSK behavior to the particular application domain. The experimental results show that GRSK can be successfully used with different domains.

The enormous offer of user-generated content on the internet and its continuous growth make the selection process increasingly difficult for end-users. This abundance of content can be handled by a recommendation system that observes user preferences and assists people by offering interesting suggestions.
However, present-day recommendation systems are optimized for suggesting premium content and partially lose their effectiveness when recommending user-generated content. The transitoriness of the content and the sparsity of the data matrix are two major characteristics that influence the effectiveness of the recommendation algorithm and in which premium and user-generated content systems can be distinguished. Therefore, we developed an advanced collaborative filtering algorithm which takes into account the specific characteristics of user-generated content systems.
As a solution to the sparsity problem, inadequate profiles will be extended with the most likely future consumptions. These extended profiles will increase the profile overlap probability, which will increase the number of neighbours in a collaborative filtering system. In this way, the personal suggestions are based on an enlarged group of neighbours, which makes them more precise and diverse than traditional collaborative filtering recommendations.
This paper explains in detail the proposed algorithm and demonstrates the improvements on standard collaborative filtering algorithms.
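The profile-extension idea can be sketched as follows. This is a minimal sketch under an assumption: overall item popularity stands in for the "most likely future consumptions"; the profiles and item names are hypothetical.

```python
from collections import Counter

profiles = {
    "alice": {"clip1", "clip2"},
    "bob":   {"clip2", "clip3", "clip4"},
    "carol": {"clip5"},            # too sparse to overlap with anyone
}

def extend_profiles(profiles, min_size=3):
    """Pad sparse profiles with the globally most popular items."""
    # Rank items by popularity, with a deterministic tie-break by name;
    # popularity here stands in for "most likely future consumptions".
    popularity = Counter(i for p in profiles.values() for i in p)
    ranked = [i for i, _ in sorted(popularity.items(), key=lambda kv: (-kv[1], kv[0]))]
    extended = {}
    for user, items in profiles.items():
        items = set(items)
        for item in ranked:        # add likely items until the profile is big enough
            if len(items) >= min_size:
                break
            items.add(item)
        extended[user] = items
    return extended

ext = extend_profiles(profiles)
print(sorted(ext["carol"]))        # ['clip1', 'clip2', 'clip5']
```

With the extended profiles, "carol" now shares items with the other users, so overlap-based neighbour selection in a collaborative filter has something to work with.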
An automatic annotation method for annotating text with semantic labels is proposed for question answering systems. The approach first extracts the keywords from a given question. A semantic label selection module is then employed to select the semantic labels with which to tag the keywords. In order to distinguish multiple senses and assign the best semantic labels, a Bayesian method is used that refers to historically annotated questions. If there is no appropriate label, WordNet is employed to obtain candidate labels by calculating the similarity between each keyword in the question and the concept list in our predefined Tagger Ontology.
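The core of such label selection can be sketched as a count-based maximum a-posteriori choice over historically annotated questions. The history, keywords and labels below are hypothetical, and the simple count model is only a stand-in for the full Bayesian method.

```python
from collections import defaultdict

# Hypothetical annotation history: (keyword, assigned semantic label) pairs.
history = [("bank", "FINANCE"), ("bank", "FINANCE"), ("bank", "GEOGRAPHY"),
           ("apple", "FOOD"), ("apple", "COMPANY"), ("apple", "FOOD")]

counts = defaultdict(lambda: defaultdict(int))
for keyword, label in history:
    counts[keyword][label] += 1

def best_label(keyword):
    """Return the label with the highest estimated P(label | keyword)."""
    if keyword not in counts:
        return None                     # fall back to WordNet lookup in the full system
    labels = counts[keyword]
    return max(labels, key=labels.get)

print(best_label("bank"))               # FINANCE (2 of 3 past annotations)
```

The `None` branch mirrors the fallback described above: when no historical evidence exists, candidate labels are sought via WordNet similarity instead.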
A bottleneck in constructing location-based web searches is that most web pages do not contain any explicit geocoding such as geotags. An alternative solution can be based on ad-hoc georeferencing, which relies on street addresses, but the problem is how to extract and validate the address strings from free-form text. We propose a rule-based solution that detects address-based locations using a gazetteer and street-name prefix trees created from the gazetteer.
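The street-name prefix tree can be sketched as a word-level trie built from a toy gazetteer; the street names and sentence below are illustrative assumptions, and the real system adds validation rules on top of the raw matches.

```python
class Trie:
    """Word-level prefix tree over tokenized street names."""
    def __init__(self):
        self.root = {}

    def add(self, words):
        node = self.root
        for w in words:
            node = node.setdefault(w, {})
        node["$end"] = True            # sentinel marking a complete street name

    def longest_match(self, tokens, start):
        """Length of the longest street name starting at tokens[start], else 0."""
        node, best = self.root, 0
        for i, tok in enumerate(tokens[start:], 1):
            if tok not in node:
                break
            node = node[tok]
            if "$end" in node:
                best = i
        return best

gazetteer = [["main", "street"], ["elm", "avenue"],
             ["martin", "luther", "king", "boulevard"]]
trie = Trie()
for name in gazetteer:
    trie.add(name)

tokens = "turn left onto main street near the park".split()
matches = [(i, i + n) for i in range(len(tokens))
           if (n := trie.longest_match(tokens, i))]
print(matches)  # [(3, 5)] -> "main street"
```

Scanning every token position against the trie finds candidate address spans in one pass, which the rule-based validation can then confirm or reject.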
To automatically classify and process web pages, current systems use the textual content of those pages, including both the displayed content and the underlying HTML code. However, a very important feature of a web page is its visual appearance. In this paper, we show that, using generic visual features, we can classify web pages for several different types of tasks. The features used in this work are simple color and edge histograms, and Gabor and texture features.
These were extracted using an off-the-shelf visual feature extraction method. In three experiments, we classify web pages by their aesthetic value, their recency and the type of website. Results show that these simple, global visual features already produce good classification results. We also introduce an online tool that uses the trained classifiers to assess new web pages.

Nowadays, semantics is one of the greatest challenges in the evolution of IR systems, including the semi-structured IR systems considered here.
Usually, this challenge requires an additional external semantic resource related to the document collection. In order to compare concepts and, from a wider point of view, to work with semantic resources, it is necessary to have semantic similarity measures. Similarity measures assume that the concepts related to the terms have been identified without ambiguity. Misspelled terms therefore interfere with the term-to-concept matching process. We choose to deal with this last aspect and suggest a way to detect and correct misspelled terms through a fuzzy semantic weighting formula that can be integrated in an IR system.
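The general idea of fuzzily matching a possibly misspelled term to a concept label can be sketched as follows. The string-similarity ratio used here is only a stand-in for the paper's actual fuzzy weighting formula, and the concept list is hypothetical.

```python
from difflib import SequenceMatcher

def fuzzy_weight(term, concept):
    """A fuzzy membership weight in [0, 1] between a query term and a concept label."""
    return SequenceMatcher(None, term, concept).ratio()

def best_concept(term, concepts, threshold=0.8):
    """Match a term to its closest concept, or None if no weight clears the threshold."""
    scored = [(fuzzy_weight(term, c), c) for c in concepts]
    weight, concept = max(scored)
    return concept if weight >= threshold else None

concepts = ["semantics", "retrieval", "ontology"]
print(best_concept("retreival", concepts))   # tolerates the transposition
```

A graded weight rather than an exact-match test lets a misspelled query term still contribute to term-to-concept matching, with the threshold controlling how permissive the correction is.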
In order to evaluate the expected gains, we have developed a prototype whose first results on small datasets seem promising.

Keim, Martin Atkinson and William Ribarsky. This paper presents a visual analytics approach to explore large news article collections in the domains of polarity and spatial analysis. The exploration is performed on data collected with the Europe Media Monitor (EMM), a system which monitors over online sources and processes 90, articles per day.
By analyzing the news feeds, we want to find out which topics are important in different countries and what is the general polarity of the articles within these topics. To assess the polarity of a news article, automatic techniques for polarity analysis are employed and the results are represented using Literature Fingerprinting for visualization. In the spatial description of the news feeds, every article can be represented by two geographic attributes, the news origin and the location of the event itself.
In order to assess these spatial properties of news articles, we conducted a geo-analysis that is able to cope with the size and spatial distribution of the data. The spatial analysis of the news article data employs the Pixel Placement and Cartogram techniques to deal with these challenges. Within this application framework, we show how real-time news feed data can be analyzed efficiently.

This work presents a data-driven definition question answering (QA) system that outputs a set of temporally anchored definitions as answers.
This system builds surface language models on top of a corpus automatically acquired from Wikipedia abstracts, and then ranks answer candidates in agreement with these models. Additionally, this study deals at greater length with the impact of several surface features on the ranking of temporally anchored answers.

Social applications are prone to information explosion, due to the proliferation of user-generated content.
Locating and retrieving information in their context therefore poses a great challenge. Classical information retrieval methods are inadequate in this environment, and users inevitably drown in an information flood. This is addressed through an information valuation method that estimates how likely it is for each information item to be accessed in the near future. The experiments verify that our method performs significantly better than others typically used in social applications, while being more versatile, too.
In the Internet economy, it has become a crucial task of electronic business to monitor and optimize websites, their usage and online marketing success. Web analytics, which is defined as the measurement, collection, analysis and reporting of Internet data, is an effective instrument of website management. First, this paper describes the technical functionality and use of web analytics and discusses different web metrics.
Second, a fuzzy web analytics approach is proposed, which makes it possible to classify metrics precisely into more than one class at the same time. Third, a fuzzy web metrics index has been developed for multidimensional, intelligent web analysis. Fuzzy logic enables computing with words and more intuitive, human-oriented queries, segmentation and descriptions of metrics in natural language. Finally, a web analytics framework is suggested to analyze and control key performance indicators in a web controlling loop.
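The fuzzy classification of a metric into overlapping classes can be sketched with triangular membership functions. The metric (bounce rate) and the thresholds below are illustrative assumptions, not the index actually proposed.

```python
def memberships(bounce_rate):
    """Fuzzy membership of a 0-100 bounce rate in 'low', 'medium', 'high'.

    Thresholds are illustrative: full 'low' below 25, full 'high' above 75,
    with linear transitions so a value can belong to two classes at once.
    """
    low = max(0.0, min(1.0, (50 - bounce_rate) / 25))
    high = max(0.0, min(1.0, (bounce_rate - 50) / 25))
    medium = max(0.0, 1.0 - low - high)
    return {"low": low, "medium": medium, "high": high}

# A rate of 45% belongs partly to 'low' and partly to 'medium' at the same time,
# which is exactly the "more than one class" property described above.
print(memberships(45))
```

Graded memberships like these also support the natural-language queries mentioned above ("show pages with a rather high bounce rate") better than crisp class boundaries do.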
To ensure the quality of adaptive contents, there should be continuous testing during the development phase. One of the most important reasons to empirically test the content during the development phase is the balance of the adaptive framework. Empirical testing is time-consuming, and in many cases several iterative cycles are needed.
In we started to develop methods of testing in a computational test bench. The idea of speeding up the production process was based on software agents that could behave like a real user community. The study shows that we can construct very reliable artificial behaviour when comparing it to human behaviour at the group level.

This paper presents CORD, a hybrid clustering system which combines modifications of three modern clustering approaches into a hybrid solution that is able to efficiently process very large sets of ordinal data.
The Self-organizing Maps algorithm for categorical data by Chen and Marques is used for a rough pre-clustering that finds the initial position and number of centroids. The main clustering task utilizes a k-modes algorithm and its fuzzy set extension described by Kim et al. Both algorithms profit from this symbiosis, as their iterative computations can be done on data that is fully held in main memory. Combining these approaches, the resulting system is able to efficiently extract significant information even from very large datasets.
The presented reference implementation of the hybrid system shows good results. The aim is to cluster and visually analyze large amounts of user profiles. This should help in understanding Web user behavior and in personalizing advertisement.

Really Simple Syndication (RSS) information feeds present new challenges to information retrieval technologies. In this paper we propose an RSS feed retrieval approach which aims to give a user a personalized view of items and to make their content easier to access. In our proposal, we define different filters in order to construct the vocabulary used in the text describing feed items.
This filtering takes into account both the lexical category and the frequency of terms. The set of feed items is then represented in an m-dimensional vector space. The k-means clustering algorithm, with an adapted centroid computation and distance measure, is applied to find clusters automatically. The clusters, indexed by relevant terms, can then be refined, labeled and browsed by the user.
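The pipeline of frequency-based term filtering, vector-space representation, and k-means clustering can be sketched as follows. The feed items are invented, the seeding is a deterministic simplification, and plain Euclidean k-means stands in for the adapted centroid computation and distance measure described above.

```python
from collections import Counter
import math

# Hypothetical feed items (titles only) used as a toy corpus.
items = ["markets fall as rates rise", "rates rise again in markets",
         "team wins final match", "final match ends with team win"]

def vectorize(texts, min_freq=2):
    """Build term-frequency vectors over terms kept by a document-frequency filter."""
    df = Counter(t for doc in texts for t in set(doc.split()))
    vocab = sorted(t for t, c in df.items() if c >= min_freq)  # frequency filter
    vecs = [[doc.split().count(t) for t in vocab] for doc in texts]
    return vocab, vecs

def kmeans(vecs, k, iters=10):
    """Plain k-means with deterministic seeding (first k distinct vectors)."""
    seen = []
    for v in vecs:
        if v not in seen:
            seen.append(v)
        if len(seen) == k:
            break
    centroids = [list(v) for v in seen]
    clusters = [0] * len(vecs)
    for _ in range(iters):
        clusters = [min(range(k), key=lambda c: math.dist(v, centroids[c]))
                    for v in vecs]
        for c in range(k):
            members = [v for v, cl in zip(vecs, clusters) if cl == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return clusters

vocab, vecs = vectorize(items)
print(kmeans(vecs, k=2))   # the finance items and the sports items separate
```

The terms surviving the frequency filter double as candidate cluster labels, which is how the clusters can be indexed by relevant terms for browsing.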
We experimented with the approach on a collection of feed items collected from news sites. The resulting clusters show good cohesion and separation. This provides meaningful classes to organize the information and to classify new feed items.

The rapid development of the WWW, information technology and e-commerce has made Internet forums, e-opinion portals and personal blogs widely accessible to consumers.
As a result, it has nowadays become extremely popular for consumers to share their experience and point out their preferences and concerns with respect to a specific product on the Web. These online customer reviews possess vital information from which product designers can gain insights into their customers and products, and make improvements accordingly. However, the sheer amount of data, its distributed locations and the inherent ambiguity of human language have challenged designers greatly.
Meanwhile, we also highlight the challenges and relevant research issues involved in fulfilling such an ambition. As a pioneering study, we believe that this research will greatly help designers in the era of global competition and e-commerce.

In this paper we present the benefits of using a multi-agent system to manage data placement in a decentralized storage application. In our model, after a fragmentation step, each piece of data is associated with a mobile agent making its own decisions. Each agent follows simple rules, and the emerging behavior is a flock of fragments.
To provide efficient load-balancing, agents drop pheromones among network peers. We ran experiments to measure the cohesion degree of a flock and its network coverage. We also discuss the availability and reliability of our approach.

The advent of the Social Web has created massive online media by turning former information consumers into present-day information producers.
The best example is the blogosphere. Blog websites are collections of articles written by millions of blog writers for millions of blog readers. Blogging has become a very popular Web 2.0 medium. However, as a consequence of the massive number of blogs as well as the highly diverse topics of blog posts available on the Web, most blog search engines face the serious challenge of finding the blog articles that are truly relevant to the specific topic a blog reader may be looking for.
When smart consumer strategies generate new demand, the health system often does not have sufficient capacity and efficient clinical operations and processes to accommodate that hard-earned volume, nor to deliver on the consumer-centric promise with standardization and reliability. Operational excellence becomes more challenging with increased industry consolidation. Health systems are growing in size, level of variation, and complexity, leading to operational fragmentation that works against the consumerism agenda of standardization and personalization.
Providing a reliable and consistent experience is critical to gaining consumer loyalty. This level of standardization requires a holistic, balanced approach to operations driven from the top down and across the organization. To do so, health system leaders will need to leverage one of the most powerful strategic assets at their disposal: data.
Here are a few ways in which new methods of analyzing and harnessing data can help health systems deliver operationally on their promise of patient-centric service. Consumers now expect the same high level of service and personalization from healthcare organizations as they get from leading non-healthcare companies. Instead, they are often frustrated by a lack of convenience, transparency, flexibility, and respect for their time. Common complaints include lack of online appointment-scheduling; lengthy waits for an appointment slot; having to provide the same information at registration that they did during scheduling; delayed appointment starts; uncertain results availability; and lack of clarity about the cost of their care.
Organizations recognize that they need to assess these current barriers to consumer satisfaction. Applying advanced and predictive analytics to existing data can help them understand what to solve for and identify the actionable next steps. Consumerism is also about personalizing patient access and engagement. Two patients with the same diagnosis do not necessarily have the same needs and preferences. Analyzing data on specific patient segments can help organizations structure operations to align clinical care services and operational processes to provide timely and on-demand patient access to the appropriate services.
The goal is to enhance convenience, choice and personalization while still ensuring that core care delivery is standardized, efficient and cost-effective. Patient-centric organizations are focused on building care pathways that provide a seamless experience for all patients at every access point. As providers design their networks, data will be key to informing locations for care, strategies to meet supply-demand fluctuations, and opportunities to differentiate the patient experience.
Written by Tullio Siragusa, May 22. Engaging with customers, partners and each other as individuals has fundamentally changed, and the power of decision now rests with the individual. Before the consumer revolution and the expansion of social media platforms and communities, we did not have the ability to connect and analyze at this level. Personal relationships with brands and products existed, but were not visible to others, as is now possible with the digital economy and social media.
While it is possible today to extract value from this opportunity, it is critical to have both a business strategy (CMO) and big data analytic solutions (CIO) working together to support growing the business, one person at a time. Banks have been leaders in analytics for decades, yet they have not fully realized the benefits until now. Customers are expecting a more personalized service across all industries, and banks are not immune. All these factors are making it increasingly difficult for banks to stay relevant and turn a profit.
Unlocking the contextual insights in the data to better understand customers represents a significant opportunity to gain a competitive advantage, and fundamentally change the way decisions are made for commercial gain. Big Data should be about changing the way you do business to harness the real value in your data, re-shape the interaction with the market, and increase the relationship value with your customers.
Therefore, which data is required to achieve these objectives, who needs it, and how often, are key big data decisions to consider, especially when multiple data sources, coupled with geo-spatial data, social media, emails, call center transcripts, and other unstructured data all play a part in knowing the customer today, and tomorrow.
Both internal and external data, structured and unstructured, should enable financial services firms to personalize their products for each customer and tie in the enhancements needed across the organization to support better customer-centric performance. In resource planning and alignment, big data solutions for financial services are not only about a customer segmentation of one, but also about leveraging existing assets in such a way as to reduce the costs of infrastructure deployment.
Financial services companies are using big data today to focus on operational issues — risk, efficiency, compliance, security and better decision making; however, there is a growing need to identify how big data will be used for innovative profit growth. The value of big data is the ability to triangulate all these disparate activities into a more holistic approach to managing customer relationships. If you have a set of customers that you've missed communicating proper disclosures to, you have a possible Dodd-Frank compliance issue, right?
Offering this consumer a new product might not be such a good idea until you resolve the compliance or expectations issue.