You searched for subject:(Computer science). Showing records 1 – 30 of 40882 total matches (page 1 of 1363).

University of Helsinki
1.
Tripathi, Abhishek.
Data fusion and matching by maximizing statistical dependencies.
Degree: Department of Computer Science; Helsinki Institute for Information Technology HIIT, 2011, University of Helsinki
URL: http://hdl.handle.net/10138/24569
The core aim of machine learning is to make a computer program learn from experience. Learning from data is usually defined as a task of learning regularities or patterns in data in order to extract useful information, or to learn the underlying concept. An important sub-field of machine learning is called multi-view learning, where the task is to learn from multiple data sets or views describing the same underlying concept. A typical example of such a scenario would be to study a biological concept using several biological measurements like gene expression, protein expression and metabolic profiles, or to classify web pages based on their content and the contents of their hyperlinks.
In this thesis, novel problem formulations and methods for multi-view learning are presented. The contributions include a linear data fusion approach during exploratory data analysis, a new measure to evaluate different kinds of representations for textual data, and an extension of multi-view learning for novel scenarios where the correspondence of samples in the different views or data sets is not known in advance. In order to infer the one-to-one correspondence of samples between two views, a novel concept of multi-view matching is proposed. The matching algorithm is completely data-driven and is demonstrated in several applications such as matching of metabolites between humans and mice, and matching of sentences between documents in two languages.
Machine learning aims to create computer programs that learn through experience. The task is often to learn regularities from data sets that yield new information about the phenomenon underlying the data and allow the phenomenon to be understood better. One central subfield of machine learning deals with learning by combining several data sets that describe the same phenomenon. The goal may be, for example, to understand a cell-level biological phenomenon by examining gene activity measurements, protein concentrations and metabolic activity simultaneously. As another example, web pages can be classified simultaneously on the basis of both their textual content and their hyperlink structure.
This dissertation presents new principles and methods for combining multiple data sources. The main results are a linear data fusion method for exploratory analysis, a new method for comparing different representations of textual data, and a new fusion principle for situations in which the correspondence between the samples of the data sets is not known in advance. The thesis shows how this correspondence can be learned from the data sets themselves, without external supervision. The new method is applied, for example, to finding correspondences between human and mouse metabolic measurements and to finding sentences with the same meaning in texts written in two different languages.
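To make the matching idea concrete, the following is a minimal illustrative sketch, not the algorithm developed in the thesis: it assumes the two views happen to share the same feature space, scores candidate pairs by row-wise correlation, and solves the resulting assignment problem. The function name match_views and the toy data are hypothetical.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def match_views(x_view, y_view):
        """Toy one-to-one matching of samples between two views.

        Standardizes each row, scores pairs by correlation, and picks the
        assignment with maximal total similarity. Illustrative only.
        """
        xs = (x_view - x_view.mean(axis=1, keepdims=True)) / x_view.std(axis=1, keepdims=True)
        ys = (y_view - y_view.mean(axis=1, keepdims=True)) / y_view.std(axis=1, keepdims=True)
        similarity = xs @ ys.T / x_view.shape[1]
        rows, cols = linear_sum_assignment(-similarity)   # maximize similarity
        return list(zip(rows, cols))

    # Recover a known permutation of the same samples.
    rng = np.random.default_rng(0)
    x_view = rng.normal(size=(5, 20))
    perm = rng.permutation(5)
    print(match_views(x_view, x_view[perm]))   # each pair (i, j) satisfies perm[j] == i

In realistic multi-view settings the views live in different feature spaces, so the between-view dependency has to be measured with tools such as canonical correlation rather than plain correlation.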
Subjects/Keywords: Computer Science

University of Helsinki
2.
Polishchuk, Tatiana.
Enabling Multipath and Multicast Data Transmission in Legacy and Future Internet.
Degree: Department of Computer Science; HIIT, 2013, University of Helsinki
URL: http://hdl.handle.net/10138/40248
The quickly growing community of Internet users is requesting ever more applications and services. At the same time the structure of the network is changing. From the performance point of view, there is a tight interplay between the application and the network design. The network must be constructed to provide adequate performance for the target application.
In this thesis we consider how to improve the quality of users' experience concentrating on two popular and resource-consuming applications: bulk data transfer and real-time video streaming. We share our view on the techniques which enable feasibility and deployability of the network functionality leading to unquestionable performance improvement for the corresponding applications.
Modern mobile devices, equipped with several network interfaces, as well as multihomed residential Internet hosts are capable of maintaining multiple simultaneous attachments to the network. We propose to enable simultaneous multipath data transmission in order to increase throughput and speed up such bandwidth-demanding applications as, for example, file download. We design an extension for Host Identity Protocol (mHIP), and propose a multipath data scheduling solution on a wedge layer between IP and transport, which effectively distributes packets from a TCP connection over available paths. We support our protocol with a congestion control scheme and prove its ability to compete in a friendly manner against the legacy network protocols. Moreover, applying game-theoretic analytical modelling we investigate how the multihomed HIP multipath-enabled hosts coexist in the shared network.
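As a rough illustration of the kind of decision a multipath scheduler makes (a generic sketch, not the mHIP scheduler described above), the snippet below assigns each packet of a flow to whichever path is expected to finish transmitting it earliest, given assumed per-path rates; all names and numbers are made up.

    def schedule(packet_sizes, path_rates):
        """Greedy earliest-finish assignment of packets to paths (toy model)."""
        finish_time = [0.0] * len(path_rates)
        assignment = []
        for size in packet_sizes:
            best = min(range(len(path_rates)),
                       key=lambda p: finish_time[p] + size / path_rates[p])
            finish_time[best] += size / path_rates[best]
            assignment.append(best)
        return assignment

    # Ten 1500-byte packets over a 10 Mbit/s and a 2 Mbit/s path:
    print(schedule([1500 * 8] * 10, [10e6, 2e6]))   # most packets go to the faster path

A real scheduler must additionally cope with changing path capacities, losses and reordering at the receiver, which is where the congestion control scheme and wedge-layer design mentioned above come in.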
The number of real-time applications grows quickly. Efficient and reliable transport of multimedia content is a critical issue of today's IP network design. In this thesis we solve scalability issues of the multicast dissemination trees controlled by the hybrid error correction. We propose a scalable multicast architecture for potentially large overlay networks. Our techniques address suboptimality of the adaptive hybrid error correction (AHEC) scheme in the multicast scenarios. A hierarchical multi-stage multicast tree topology is constructed in order to improve the performance of AHEC and guarantee QoS for the multicast clients. We choose an evolutionary networking approach that has the potential to lower the required resources for multimedia applications by utilizing the error-correction domain separation paradigm in combination with selective insertion of the supplementary data from parallel networks, when the corresponding content is available.
Clearly both multipath data transmission and multicast content dissemination are the future Internet trends. We study multiple problems related to the deployment of these methods.
The rapidly growing community of Internet users demands ever more applications and services from the network. At the same time the structure of the network is changing. From the performance point of view, there is a clear interplay between the application and the network design. The network must be built so that…
Subjects/Keywords: Computer Science

University of Helsinki
3.
Entner, Doris.
Causal Structure Learning and Effect Identification in Linear Non-Gaussian Models and Beyond.
Degree: Department of Computer Science, 2013, University of Helsinki
URL: http://hdl.handle.net/10138/41673
In many fields of science, researchers are keen to learn causal connections among quantities of interest. For instance, in medical studies doctors want to infer the effect of a new drug on the recovery from a particular disease, or economists may be interested in the effect of education on income.
The preferred approach to causal inference is to carry out controlled experiments. However, such experiments are not always possible due to ethical, financial or technical restrictions. An important problem is thus the development of methods to infer cause-effect relationships from passive observational data. While this is a rather old problem, in the late 1980s research on this issue gained significant momentum, and much attention has been devoted to this problem ever since. One rather recently introduced framework for causal discovery is given by linear non-Gaussian acyclic models (LiNGAM). In this thesis, we apply and extend this model in several directions, also considering extensions to non-parametric acyclic models.
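For reference, the LiNGAM model referred to above is commonly written as

    x = Bx + e, \qquad e_i \ \text{mutually independent and non-Gaussian},

where the connection matrix B can be permuted to strict lower-triangular form (acyclicity). Equivalently x = (I - B)^{-1} e, an ICA-type model whose estimated mixing matrix identifies both the causal ordering and the connection strengths.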
We address the problem of causal structure learning from time series data, and apply a recently developed method using the LiNGAM approach to two economic time series data sets. As an extension of this algorithm, in order to allow for non-linear relationships and latent variables in time series models, we adapt the well-known Fast Causal Inference (FCI) algorithm to such models.
We are also concerned with non-temporal data, generalizing the LiNGAM model in several ways: We introduce an algorithm to learn the causal structure among multidimensional variables, and provide a method to find pairwise causal relationships in LiNGAM models with latent variables. Finally, we address the problem of inferring the causal effect of one given variable on another in the presence of latent variables. We first suggest an algorithm in the setting of LiNGAM models, and then introduce a procedure for models without parametric restrictions.
Overall, this work provides practitioners with a set of new tools for discovering causal information from passive observational data in a variety of settings.
In many fields of science, researchers look for cause-effect relationships between the variables they are interested in. The most direct approach is offered by randomized controlled experiments: in clinical trials, for example, the effect of a new drug on a disease is assessed by randomly dividing patients into two groups, one of which receives the actual drug and the other only a placebo. The true effect of the drug is revealed by comparing the outcomes of the two groups.
In many cases, however, such experiments are not possible. For example, when economists study the effect of education on income, directly dictating the education level of the participants would be both unethical and practically impossible. Researchers therefore often have to resort to passively collected (non-experimental) observational data. Such data, however, do not necessarily reveal causal relationships directly, because…
Subjects/Keywords: Computer Science

University of Helsinki
4.
Galbrun, Esther.
Methods for Redescription Mining.
Degree: Department of Computer Science; Helsinki Institute for Information Technology (HIIT), 2013, University of Helsinki
URL: http://hdl.handle.net/10138/41710
In scientific investigations, data are often heterogeneous in nature. For instance, they might originate from distinct sources or be cast over separate terminologies. In order to gain insight into the phenomenon of interest, a natural task is to identify the correspondences that exist between these different aspects.
This is the motivating idea of redescription mining, the data analysis task studied in this thesis. Redescription mining aims to find distinct common characterizations of the same objects and, vice versa, to identify sets of objects that admit multiple shared descriptions.
A practical example in biology consists in finding geographical areas that admit two characterizations, one in terms of their climatic profile and one in terms of the occupying species. Discovering such redescriptions can contribute to better our understanding of the influence of climate over species distribution. Besides biology, applications of redescription mining can be envisaged in medicine or sociology, among other fields.
Previously, redescription mining was restricted to propositional queries over Boolean attributes. However, many conditions, like aforementioned climate, cannot be expressed naturally in this limited formalism. In this thesis, we consider more general query languages and propose algorithms to find the corresponding redescriptions, making the task relevant to a broader range of domains and problems.
Specifically, we start by extending redescription mining to non-Boolean attributes. In other words, we propose an algorithm to handle nominal and real-valued attributes natively. We then extend redescription mining to the relational setting, where the aim is to find corresponding connection patterns that relate almost the same object tuples in a network.
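As a small illustration of what a single redescription looks like once real-valued attributes are allowed, the sketch below scores one candidate pair of queries by the Jaccard similarity of their supports, a standard accuracy measure in redescription mining; the data, thresholds and queries are invented for the example.

    # One candidate redescription: a climate-view query and a species-view query
    # over the same geographical areas; accuracy = Jaccard similarity of supports.
    areas = [
        {"temp": 21.0, "rain": 900, "species_a": 1},
        {"temp": 5.0,  "rain": 300, "species_a": 0},
        {"temp": 19.5, "rain": 700, "species_a": 1},
        {"temp": 18.0, "rain": 200, "species_a": 0},
    ]

    climate_query = lambda r: r["temp"] > 15 and r["rain"] > 500   # view 1: climate
    species_query = lambda r: r["species_a"] == 1                  # view 2: species

    support1 = {i for i, r in enumerate(areas) if climate_query(r)}
    support2 = {i for i, r in enumerate(areas) if species_query(r)}
    jaccard = len(support1 & support2) / len(support1 | support2)
    print(support1, support2, jaccard)   # identical supports -> accuracy 1.0

The mining algorithms then search over pairs of such queries, in the richer query languages described above, for pairs whose supports agree as closely as possible.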
We also study approaches for selecting high quality redescriptions to be output by the mining process. The first approach relies on an interface for mining and visualizing redescriptions interactively and allows the analyst to tailor the selection of results to meet his needs. The second approach, rooted in information theory, is a compression-based method for mining small sets of associations from two-view datasets.
In summary, we take redescription mining outside the Boolean world and show its potential as a powerful exploratory method relevant in a broad range of domains.
Scientific research data are often collected from sources that use different terminologies. Identifying the correspondences and connections between these different viewpoints is a natural way to approach the phenomenon under study.
This dissertation studies a data analysis method that aims at exactly this: redescription mining. The goal of redescriptions is, on the one hand, to describe the same thing in alternative ways and, on the other hand, to identify the things that admit several different descriptions.
Redescription mining has potential applications in biology, medicine and sociology, among other fields. In biology, for example, one can look for geographical areas that can be…
Subjects/Keywords: Computer Science

University of Helsinki
5.
Bhattacharya, Sourav.
Continuous Context Inference on Mobile Platforms.
Degree: Department of Computer Science, 2014, University of Helsinki
URL: http://hdl.handle.net/10138/135640
In this thesis we develop novel methods for continuous and sustained context inference on mobile platforms. We address challenges present in real-world deployment of two popular context recognition tasks within ubiquitous computing and mobile sensing, namely localization and activity recognition. In the first part of the thesis, we provide a new localization algorithm for mobile devices using the existing GSM communication infrastructures, and then propose a solution for energy-efficient and robust tracking on mobile devices that are equipped with sensors such as GPS, compass, and accelerometer. In the second part of the thesis we propose a novel sparse-coding-based activity recognition framework that mitigates the time-consuming and costly bootstrapping process of activity recognizers employing supervised learning. The framework uses a vast amount of unlabeled data to automatically learn a sensor data representation through a set of extracted characteristic patterns and generalizes well across activity domains and sensor modalities.
This dissertation examines context recognition on smartphones. It presents new methods that allow recognition algorithms to run continuously as part of everyday phone applications. Particular attention is paid to localization and activity recognition, two of the most central areas of context recognition research. The first part of the dissertation introduces a new GSM localization algorithm and develops methods for energy-efficient route and location tracking on smartphones. The second part develops a new approach to activity recognition based on sparse coding. The principle of the method is to search for general regularities in observations whose class labels are unknown. The discovered regularities can be used to derive a new representation for the observations, which makes it possible to build recognition algorithms with considerably less labelled data.
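The sparse-coding idea in the second part can be sketched with off-the-shelf tools: learn a dictionary of characteristic patterns from unlabeled sensor windows, then use the resulting sparse codes as features for a classifier trained on a much smaller labelled set. The snippet below is a generic illustration with random data standing in for accelerometer windows, not the framework implemented in the thesis.

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning

    rng = np.random.default_rng(0)
    unlabeled = rng.normal(size=(1000, 64))   # stand-in for many unlabeled sensor windows
    labeled = rng.normal(size=(50, 64))       # stand-in for a small labelled set

    # Learn a codebook of characteristic patterns from unlabeled data only.
    dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
    dico.fit(unlabeled)

    # Sparse codes become the feature representation for a supervised classifier.
    codes = dico.transform(labeled)
    print(codes.shape, "average non-zeros per window:", (codes != 0).sum(1).mean())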
Subjects/Keywords: Computer Science

University of Helsinki
6.
Kempa, Dominik.
Efficient Construction of Fundamental Data Structures in Large-Scale Text Indexing.
Degree: Department of Computer Science, 2015, University of Helsinki
URL: http://hdl.handle.net/10138/156516
This thesis studies efficient algorithms for constructing the most fundamental data structures used as building blocks in (compressed) full-text indexes. Full-text indexes are data structures that allow efficiently searching for occurrences of a query string in a (much larger) text. We are mostly interested in large-scale indexing, that is, dealing with input instances that cannot be processed entirely in internal memory and thus a much slower, external memory needs to be used. Specifically, we focus on three data structures: the suffix array, the LCP array and the Lempel-Ziv (LZ77) parsing. These are routinely found as components or used as auxiliary data structures in the construction of many modern full-text indexes.
The suffix array is a list of all suffixes of a text in lexicographical order. Despite its simplicity, the suffix array is a powerful tool used extensively not only in indexing but also in data compression, string combinatorics or computational biology. The first contribution of this thesis is an improved algorithm for external memory suffix array construction based on constructing suffix arrays for blocks of text and merging them into the full suffix array.
In many applications, the suffix array needs to be augmented with the information about the longest common prefix between each two adjacent suffixes in lexicographical order. The array containing such information is called the longest-common-prefix (LCP) array. The second contribution of this thesis is the first algorithm for computing the LCP array in external memory that is not an extension of a suffix-sorting algorithm.
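To fix ideas, here is a toy in-memory version of the two arrays (the thesis, in contrast, is about constructing them in external memory for inputs far larger than RAM): a naive suffix array built by direct sorting, and the LCP array computed with Kasai's algorithm.

    def suffix_array(s):
        """Toy construction: sort suffix start positions lexicographically."""
        return sorted(range(len(s)), key=lambda i: s[i:])

    def lcp_array(s, sa):
        """Kasai's algorithm: LCP of each suffix with its lexicographic predecessor."""
        n = len(s)
        rank = [0] * n
        for r, i in enumerate(sa):
            rank[i] = r
        lcp = [0] * n
        h = 0
        for i in range(n):
            if rank[i] > 0:
                j = sa[rank[i] - 1]
                while i + h < n and j + h < n and s[i + h] == s[j + h]:
                    h += 1
                lcp[rank[i]] = h
                if h:
                    h -= 1
            else:
                h = 0
        return lcp

    s = "banana"
    sa = suffix_array(s)
    print(sa)                 # [5, 3, 1, 0, 4, 2]
    print(lcp_array(s, sa))   # [0, 1, 3, 0, 0, 2]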
When the input text is highly repetitive, the general-purpose text indexes are usually outperformed (particularly in space usage) by specialized indexes. One of the most popular families of such indexes is based on the Lempel-Ziv (LZ77) parsing. LZ77 parsing is the encoding of text that replaces long repeating substrings with references to other occurrences. In addition to indexing, LZ77 is a heavily used tool in data compression. The third contribution of this thesis is a series of new algorithms to compute the LZ77 parsing, both in RAM and in external memory.
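Similarly, a naive quadratic-time sketch of the LZ77 parsing: each phrase is the longest prefix of the remaining text that already occurs earlier, or a single fresh character. The thesis develops far more efficient RAM and external-memory algorithms for the same task.

    def lz77_parse(text):
        """Naive LZ77 factorization; phrases are (position, length) or (literal,)."""
        phrases, i = [], 0
        while i < len(text):
            best_len, best_pos = 0, -1
            for j in range(i):   # try every earlier starting position
                l = 0
                while i + l < len(text) and text[j + l] == text[i + l]:
                    l += 1
                if l > best_len:
                    best_len, best_pos = l, j
            if best_len == 0:
                phrases.append((text[i],))          # new character, emit a literal
                i += 1
            else:
                phrases.append((best_pos, best_len))
                i += best_len
        return phrases

    print(lz77_parse("abababx"))   # [('a',), ('b',), (0, 4), ('x',)]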
The algorithms introduced in this thesis significantly improve upon the prior art. For example: (i) our new approach for constructing the LCP array in external memory is faster than the previously best algorithm by a factor of 2-4 and simultaneously reduces the disk space usage by a factor of four; (ii) a parallel version of our improved suffix array construction algorithm is able to handle inputs much larger than considered in the literature so far. In our experiments, computing the suffix array of a 1 TiB file with the new algorithm took a little over a week and required only 7.2 TiB of disk space (including input and output), whereas on the same machine the previously best algorithm would require 3.5 times as much disk space and take about four times longer.
The algorithms studied in this thesis can be used to efficiently construct the fundamental data structures that…
Subjects/Keywords: Computer Science

University of Helsinki
7.
Wang, Liang.
Content, Topology and Cooperation in In-network Caching.
Degree: Department of Computer Science, 2015, University of Helsinki
URL: http://hdl.handle.net/10138/153545
In-network caching aims at improving content delivery and alleviating pressure on network bandwidth by leveraging universally networked caches. This thesis studies the design of cooperative in-network caching strategies from three perspectives: content, topology and cooperation, focusing specifically on the mechanisms of content delivery and cooperation policy and their impact on the performance of cache networks.
The main contributions of this thesis are twofold. From measurement perspective, we show that the conventional metric hit rate is not sufficient in evaluating a caching strategy on non-trivial topologies, therefore we introduce footprint reduction and coupling factor, which contain richer information. We show cooperation policy is the key in balancing various tradeoffs in caching strategy design, and further investigate the performance impact from content per se via different chunking schemes.
From design perspective, we first show different caching heuristics and smart routing schemes can significantly improve the caching performance and facilitate content delivery. We then incorporate well-defined fairness metric into design and derive the unique optimal caching solution on the Pareto boundary with bargaining game framework. In addition, our study on the functional relationship between cooperation overhead and neighborhood size indicates collaboration should be constrained in a small neighborhood due to its cost growing exponentially on general network topologies.
In-network caching aims to improve content delivery and to ease the pressure on network bandwidth by exploiting universally networked caches. This dissertation studies the design of cooperative in-network caching from three perspectives: content, topology and cooperation, focusing in particular on the mechanisms of content delivery and the cooperation policies, and on their impact on the performance of cache networks.
The main contributions of the dissertation fall into two areas. From the measurement perspective, we show that the conventional metric, cache hit rate, is not sufficient for evaluating a caching strategy on non-trivial topologies, so we introduce footprint reduction and the coupling factor, which carry richer information. We show that the cooperation policy is the key to balancing the various trade-offs in caching strategy design, and we further investigate the performance impact of the content itself through different chunking schemes.
From the design perspective, we first show how different caching heuristics and smart routing schemes significantly improve caching performance and facilitate content delivery. We then incorporate a well-defined fairness metric into the design and derive the unique optimal caching solution on the Pareto boundary within a bargaining game framework. In addition, our study of the functional relationship between cooperation overhead and neighbourhood size…
Subjects/Keywords: Computer Science

University of Helsinki
8.
Zou, Yuan.
On Model Selection for Bayesian Networks and Sparse Logistic Regression.
Degree: Department of Computer Science, 2017, University of Helsinki
URL: http://hdl.handle.net/10138/174619
Model selection is one of the fundamental tasks in scientific research. In this thesis, we address several research problems in statistical model selection, which aims to select the statistical model that best fits the data. We focus on model selection problems in Bayesian networks and logistic regression from both theoretical and practical perspectives.
We first compare different model selection criteria for learning Bayesian networks and focus on the Fisher information approximation (FIA) criterion. We describe how FIA fails when the candidate models are complex and only limited data are available. We show that although the Bayesian information criterion (BIC) is a coarser approximation than FIA, it achieves better results in most cases.
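For orientation, the two criteria compared here are usually written, in code-length form and up to constants, as

    \mathrm{BIC}(M) = -\log P(D \mid \hat\theta, M) + \tfrac{k}{2}\log n
    \mathrm{FIA}(M) = -\log P(D \mid \hat\theta, M) + \tfrac{k}{2}\log\tfrac{n}{2\pi} + \log \int_{\Theta} \sqrt{\det I(\theta)}\, d\theta

where k is the number of parameters, n the sample size, and I(θ) the Fisher information matrix. Roughly, the extra terms make FIA a finer approximation of the normalized maximum likelihood code length, but they are also the part that becomes unreliable for complex models with little data, as noted above.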
Then, we present a method named Semstem, based on the structural expectation maximization algorithm, for learning stemmatic trees as a special type of Bayesian networks that model the evolutionary relationships among historical manuscripts. Semstem selects the best model by the maximum likelihood criterion, which is equivalent to BIC in this case. We show that Semstem usually achieves higher accuracy and better interpretability than other popular methods when applied to two benchmark data sets.
Before we turn to the topic of learning another type of Bayesian networks, we start with a study on how to efficiently learn interactions among variables. To reduce the search space, we apply basis functions on the input variables and transform the original problem into a model selection problem in logistic regression. Then we can use Lasso to select a small set of effective predictors out of a large set of candidates. We show that the Lasso-based method is more robust than an earlier method under different situations.
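A minimal sketch of that pipeline (basis expansion followed by L1-regularized logistic regression), using scikit-learn and synthetic data; the specific basis functions and settings in the thesis differ, and everything below is illustrative only.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 8))
    y = (X[:, 0] * X[:, 1] > 0).astype(int)   # label depends on one interaction only

    # Basis expansion: add pairwise interaction terms, then let L1 pick the few that matter.
    Z = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False).fit_transform(X)
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(Z, y)
    print((model.coef_ != 0).sum(), "of", Z.shape[1], "candidate predictors kept")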
We extend the Lasso-based method for learning Bayesian networks with local structure, i.e. regularities in conditional probability distributions. We show that our method is more suitable than some classic methods that do not consider local structure. Moreover, when the local structure is complex, our method outperforms two other methods that are also designed for learning local structure.
Model selection is one of the fundamental problems of scientific research. In this dissertation we address several research questions related to statistical model selection, where the goal is to choose the statistical model that best fits the data. We examine model selection problems for Bayesian networks and logistic regression from both theoretical and practical perspectives.
We first compare different model selection criteria for learning Bayesian networks and focus on the criterion based on the Fisher information approximation (FIA). We show that FIA fails in model selection when the candidate models are complex and the amount of data is limited. We demonstrate that although the Bayesian information criterion (BIC) is coarser than FIA, it most often produces better results.
Next we present…
Subjects/Keywords: Computer Science

University of Helsinki
9.
Ding, Yi.
Collaborative Traffic Offloading for Mobile Systems.
Degree: Department of Computer Science, 2015, University of Helsinki
URL: http://hdl.handle.net/10138/157781
Due to the popularity of smartphones and mobile streaming services, the growth of traffic volume in mobile networks is phenomenal. This leads to huge investment pressure on mobile operators' wireless access and core infrastructure, while the profits do not necessarily grow at the same pace. As a result, it is urgent to find a cost-effective solution that can scale to the ever-increasing traffic volume generated by mobile systems. Among many visions, mobile traffic offloading is regarded as a promising mechanism: it uses complementary wireless communication technologies, such as WiFi, to offload data traffic away from overloaded mobile networks. The current trend to equip mobile devices with an additional WiFi interface also supports this vision.
This dissertation presents a novel collaborative architecture for mobile traffic offloading that can efficiently utilize the context and resources from networks and end systems. The main contributions include a network-assisted offloading framework, a collaborative system design for energy-aware offloading, and a software-defined networking (SDN) based offloading platform. Our work is the first in this domain to integrate energy and context awareness into mobile traffic offloading from an architectural perspective. We have conducted extensive measurements on mobile systems to identify hidden issues of traffic offloading in the operational networks. We implement the offloading protocol in the Linux kernel and develop our energy-aware offloading framework in C++ and Java on commodity machines and smartphones. Our prototype systems for mobile traffic offloading have been tested in a live environment. The experimental results suggest that our collaborative architecture is feasible and provides reasonable improvement in terms of energy saving and offloading efficiency. We further adopt the programmable paradigm of SDN to enhance the extensibility and deployability of our proposals. We release the SDN-based platform under open-source licenses to encourage future collaboration with research community and standards developing organizations. As one of the pioneering work, our research stresses the importance of collaboration in mobile traffic offloading. The lessons learned from our protocol design, system development, and network experiments shed light on future research and development in this domain.
One of the biggest challenges of mobile networks is the exponential growth of traffic volume. This growth in network traffic is largely due to popular video services, such as YouTube and Netflix, which stream video over the network. The increased load on the network demands investments to expand it. It is important to find cost-effective ways of delivering content at large scale without massive infrastructure investments.
Various traffic offloading methods have been proposed as a solution for making content delivery in mobile networks more efficient. These solutions exploit complementary wireless technologies to make data delivery more efficient,…
Subjects/Keywords: Computer Science

University of Helsinki
10.
Fagerholm, Fabian.
Software Developer Experience : Case Studies in Lean-Agile and Open Source Environments.
Degree: Department of Computer Science, 2015, University of Helsinki
URL: http://hdl.handle.net/10138/158080
Human factors have been identified as having the largest impact on performance and quality in software development. While production methods and tools, such as development processes, methodologies, integrated development environments, and version control systems, play an important role in modern software development, the largest sources of variance and opportunities for improvement can be found in individual and group factors. The success of software development projects is highly dependent on cognitive, conative, affective, and social factors among individuals and groups. When success is considered to include not only fulfilment of schedules and profitability, but also employee well-being and public impact, particular attention must be paid to software developers and their experience of the software development activity.
This thesis uses a mixed-methods research design, with case studies conducted in contemporary software development environments, to develop a theory of software developer experience. The theory explains what software developers experience as part of the development activity, how an experience arises, how the experience leads to changes in software artefacts and the development environment through behaviour, and how the social nature of software development mediates both the experience and outcomes. The theory can be used both to improve software development work environments and to design further scientific studies on developer experience.
In addition, the case studies provide novel insights into how software developers experience software development in contemporary environments. In Lean-Agile software development, developers are found to be engaged in a continual cycle of Performance Alignment Work, where they become aware of, interpret, and adapt to performance concerns on all levels of an organisation. High-performing teams can successfully carry out this cycle and also influence performance expectations in other parts of the organisation and beyond.
The case studies show that values arise as a particular concern for developers. The combination of Lean and Agile software development allows for a great deal of flexibility and self-organisation among developers. As a result, developers themselves must interpret the value system inherent in these methodologies in order to inform everyday decision-making. Discrepancies in the understanding of the value system may lead to different interpretations of what actions are desirable in a particular situation. Improved understanding of values may improve decision-making and understanding of Lean-Agile software development methodologies among software developers. Organisations may wish to clarify the value system for their particular organisational culture and promote values-based leadership for their software development projects.
The distributed nature and use of virtual teams in Open Source environments present particular challenges when new members are to join a project. This thesis examines mentoring as a particular form of onboarding support…
Subjects/Keywords: Computer Science

University of Helsinki
11.
Ahonen, Teppo.
Cover Song Identification Using Compression-based Distance Measures.
Degree: Department of Computer Science, 2016, University of Helsinki
URL: http://hdl.handle.net/10138/160691
Measuring similarity in music data is a problem with various potential applications. In recent years, the task known as cover song identification has gained widespread attention. In cover song identification, the purpose is to determine whether a piece of music is a different rendition of a previous version of the composition. The task is quite trivial for a human listener, but highly challenging for a computer.
This research approaches the problem from an information theoretic starting point. Assuming that cover versions share musical information with the original performance, we strive to measure the degree of this common information as the amount of computational resources needed to turn one version into another. Using a similarity measure known as normalized compression distance, we approximate the non-computable Kolmogorov complexity as the length of an object when compressed using a real-world data compression algorithm. If two pieces of music share musical information, we should be able to compress one using a model learned from the other.
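The similarity measure at the heart of this approach is easy to state: with C(·) the compressed length under a real-world compressor, the normalized compression distance of two strings x and y is NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)). Below is a minimal sketch with zlib standing in for the compressor; the thesis evaluates several compressors and works on processed chromagram sequences, not raw bytes like here.

    import zlib

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized compression distance using zlib as the compressor C."""
        cx, cy, cxy = (len(zlib.compress(s)) for s in (x, y, x + y))
        return (cxy - min(cx, cy)) / max(cx, cy)

    cover_like = b"do re mi fa sol " * 40
    variant = b"do re mi fa sol " * 40 + b"la ti do "
    unrelated = bytes(range(256)) * 4
    print(round(ncd(cover_like, variant), 3), round(ncd(cover_like, unrelated), 3))
    # the similar pair scores a much smaller distance than the unrelated pair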
In order to use compression-based similarity measuring, the meaningful musical information needs to be extracted from the raw audio signal data. The most commonly used representation for this task is known as chromagram: a sequence of real-valued vectors describing the temporal tonal content of the piece of music. Measuring the similarity between two chromagrams effectively with a data compression algorithm requires further processing to extract relevant features and find a more suitable discrete representation for them. Here, the challenge is to process the data without losing the distinguishing characteristics of the music.
In this research, we study the difficult nature of cover song identification and search for an effective compression-based system for the task. Harmonic and melodic features, different representations for them, commonly used data compression algorithms, and several other variables of the problem are addressed thoroughly. The research seeks to shed light on how different choices in the scheme contribute to the performance of the system. Additional attention is paid to combining different features, with several combination strategies studied. Extensive empirical evaluation of the identification system has been performed, using large sets of real-world music data.
Evaluations show that the compression-based similarity measuring performs relatively well but fails to achieve the accuracy of the existing solution that measures similarity by using common subsequences. The best compression-based results are obtained by a combination of distances based on two harmonic representations obtained from chromagrams using hidden Markov model chord estimation, and an octave-folded version of the extracted salient melody representation. The most distinct reason for the shortcoming of the compression performance is the scarce amount of data available for a single piece of music. This was partially overcome by internal data duplication. As a whole, the process is…
Subjects/Keywords: Computer Science
12.
Gross, Oskar.
Word Associations as a Language Model for Generative and Creative Tasks.
Degree: Department of Computer Science, 2016, University of Helsinki
URL: http://hdl.handle.net/10138/161251
In order to analyse natural language and gain a better understanding of documents, a common approach is to produce a language model: a structured representation of language that can then be used for analysis or generation. This thesis focuses on a fairly simple language model based on associations between words that appear together in the same sentence. We revisit the classic idea of analysing word co-occurrences statistically and propose a simple parameter-free method for extracting common word associations, i.e. associations between words that are often used in the same context (e.g., Batman and Robin). Additionally, we propose a method for extracting associations which are specific to a document or a set of documents. The idea behind the method is to take into account the common word associations and highlight word associations that co-occur in the document unexpectedly often.
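The thesis's association measure is parameter-free and not reproduced here, but the flavour of such sentence-level statistics can be shown with a common stand-in, pointwise mutual information over sentence co-occurrences; the toy sentences and scores below are invented for illustration.

    from collections import Counter
    from itertools import combinations
    from math import log

    sentences = [
        "batman and robin patrol gotham",
        "batman fights crime in gotham",
        "robin trains with batman",
        "the chef cooks pasta in the kitchen",
        "the chef tastes the pasta",
    ]

    n_sent = len(sentences)
    word_count, pair_count = Counter(), Counter()
    for s in sentences:
        words = set(s.split())
        word_count.update(words)
        pair_count.update(frozenset(p) for p in combinations(sorted(words), 2))

    def pmi(a, b):
        """Pointwise mutual information of co-occurring within a sentence."""
        co = pair_count[frozenset((a, b))]
        if co == 0:
            return float("-inf")   # never seen in the same sentence
        return log((co / n_sent) / ((word_count[a] / n_sent) * (word_count[b] / n_sent)))

    print(round(pmi("batman", "robin"), 2),   # strongly associated pair
          round(pmi("chef", "pasta"), 2),     # strongly associated pair
          pmi("batman", "pasta"))             # unrelated words never co-occur here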
We will empirically show that these models can be used in practice at least for three tasks: generation of creative combinations of related words, document summarization, and creating poetry. First the common word association language model is used for solving tests of creativity – the Remote Associates test. Then observations of the properties of the model are used further to generate creative combinations of words – sets of words which are mutually not related, but do share a common related concept.
Document summarization is a task where a system has to produce a short summary of the text with a limited number of words. In this thesis, we will propose a method which will utilise the document-specific associations and basic graph algorithms to produce summaries which give competitive performance on various languages. Also, the document-specific associations are used in order to produce poetry which is related to a certain document or a set of documents. The idea is to use documents as inspiration for generating poems which could potentially be used as commentary to news stories.
Empirical results indicate that both, the common and the document-specific associations, can be used effectively for different applications. This provides us with a simple language model which could be used for different languages.
Language models are often used to understand natural languages and documents. A language model is a structured representation of language that can be used to analyse language or to generate it. This work presents a simple language model based on associations between words that occur in the same sentence. We first revisit the classic approach of analysing word co-occurrences statistically, and on this basis we introduce a parameter-free method for producing common word associations. These word associations are connections between words that occur in the same contexts, such as Batman and Robin. In addition, we introduce a method that produces such associations for a specific document or a set of documents. The method is based on their…
Subjects/Keywords: Computer Science

University of Helsinki
13.
Athukorala, Kumaripaba.
Information Search as Adaptive Interaction.
Degree: Department of Computer Science; Helsinki Institute for Information Technology HIIT, 2016, University of Helsinki
URL: http://hdl.handle.net/10138/167284
We use information retrieval (IR) systems to meet a broad range of information needs, from simple ones involving day-to-day decisions to complex and imprecise information needs that cannot be easily formulated as a question. In consideration of these diverse goals, search activities are commonly divided into two broad categories: lookup and exploratory. Lookup searches begin with precise search goals and end soon after the target is reached, while exploratory searches center on learning or investigation activities with imprecise search goals. Although exploration is a prominent life activity, it is naturally challenging for users because they lack domain knowledge; at the same time, information needs are broad, complex, and subject to constant change. It is also rather difficult for IR systems to offer support for exploratory searches, not least because of the complex information needs and the dynamic nature of the user. It is also hard to conceptualize exploration precisely. In consequence, most popular IR systems are targeted at lookup searches only. There is a clear need for better IR systems that support a wide range of search activities.
The primary objective for this thesis is to enable the design of IR systems that support exploratory and lookup searches equally well. I approached this problem by modeling information search as a rational adaptation of interactions, which aids in clear conceptualization of exploratory and lookup searches. In work building on an existing framework for examination of adaptive interaction, it is assumed that three main factors influence how we interact with search systems: the ecological structure of the environment, our cognitive and perceptual limits, and the goal of optimizing the tradeoff between information gain and time cost. This thesis contributes three models developed in research proceeding from this adaptive interaction framework, to 1) predict evolving information needs in exploratory searches, 2) distinguish between exploratory and lookup tasks, and 3) predict the emergence of adaptive search strategies. It concludes with development of an approach that integrates the proposed models for the design of an IR system that provides adaptive support for both exploratory and lookup searches.
The findings confirm the ability to model information search as adaptive interaction. The models developed in the thesis project have been empirically validated through user studies, with an adaptive search system that emphasizes the practical implications of the models for supporting several types of searches. The studies conducted with the adaptive search system further confirm that IR systems could improve information search performance by dynamically adapting to the task type. The thesis contributes an approach that could prove fruitful for future IR systems in efforts to offer more efficient and less challenging search experiences.
Information retrieval systems are used for many kinds of purposes, from finding answers to simple everyday questions to…
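As a rough illustration of the gain-versus-time tradeoff the abstract describes, the toy sketch below picks whichever search action currently offers the highest expected information gain per unit time. The actions, numbers and decision rule are invented assumptions, not the models developed in the thesis.

```python
# Hedged toy sketch of "adaptive interaction" as rate-maximizing choice:
# pick the search action with the best expected information gain per second.
# Action names and values are invented for illustration.
actions = {
    "skim_snippets":     {"gain": 2.0, "time_cost": 5.0},   # quick lookup-style step
    "open_document":     {"gain": 6.0, "time_cost": 30.0},  # deeper exploratory step
    "reformulate_query": {"gain": 3.5, "time_cost": 12.0},
}

def best_action(actions):
    # Rational choice under the gain/time tradeoff mentioned in the abstract.
    return max(actions, key=lambda a: actions[a]["gain"] / actions[a]["time_cost"])

print(best_action(actions))   # -> "skim_snippets" with these illustrative numbers
```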
Subjects/Keywords: computer science
APA (6th Edition):
Athukorala, K. (2016). Information Search as Adaptive Interaction. (Doctoral Dissertation). University of Helsinki. Retrieved from http://hdl.handle.net/10138/167284
Chicago Manual of Style (16th Edition):
Athukorala, Kumaripaba. “Information Search as Adaptive Interaction.” 2016. Doctoral Dissertation, University of Helsinki. Accessed February 27, 2021.
http://hdl.handle.net/10138/167284.
MLA Handbook (7th Edition):
Athukorala, Kumaripaba. “Information Search as Adaptive Interaction.” 2016. Web. 27 Feb 2021.
Vancouver:
Athukorala K. Information Search as Adaptive Interaction. [Internet] [Doctoral dissertation]. University of Helsinki; 2016. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/10138/167284.
Council of Science Editors:
Athukorala K. Information Search as Adaptive Interaction. [Doctoral Dissertation]. University of Helsinki; 2016. Available from: http://hdl.handle.net/10138/167284

University of Helsinki
14.
Zhao, Kai.
Understanding Urban Human Mobility for Network Applications.
Degree: Department of Computer Science, 2015, University of Helsinki
URL: http://hdl.handle.net/10138/157592
► Understanding urban human mobility is crucial for various mobile and network applications. This thesis addresses two key challenges presented by mobile applications, namely urban mobility…
(more)
▼ Understanding urban human mobility is crucial for various mobile and network applications. This thesis addresses two key challenges presented by mobile applications, namely urban mobility modeling and its applications in Delay Tolerant Networks (DTNs).
First, we model urban human mobility with transportation mode information. Our research is based on two real-life GPS datasets containing approximately 20 and 10 million GPS samples. Previous research has suggested that the trajectories in human mobility have statistically similar features to Lévy Walks. We attempt to explain the Lévy Walks behavior by decomposing the trajectories into different classes according to the different transportation modes, such as Walk/Run, Bike, Train/Subway or Car/Taxi/Bus. We show that human mobility can be modelled as a mixture of different transportation modes, and that these single movement patterns can be approximated by a lognormal distribution rather than a power-law distribution. Then, we demonstrate that the mixture of the decomposed lognormal flight distributions associated with each modality is a power-law distribution, providing an explanation for the emergence of the Lévy Walk patterns that characterize human mobility.
Second, we find that urban human mobility exhibits strong spatial and temporal patterns. We leverage such human mobility patterns to derive an optimal routing algorithm that minimizes the hop count while maximizing the number of needed nodes in DTNs. We propose a solution framework, called Ameba, for timely data delivery in DTNs. Simulation results with experimental traces indicate that Ameba achieves a comparable delivery ratio to a Flooding-based algorithm, but with much lower overhead.
Third, we infer the functions of the sub-areas in three cities by analyzing urban mobility patterns. The analysis is based on three large taxi GPS datasets in Rome, San Francisco and Beijing containing 21, 11 and 17 million GPS points, respectively. We categorize the city regions into four categories, workplaces, entertainment places, residential places and other places. We show that the identification of these functional sub-areas can be utilized to increase the efficiency of urban DTN applications.
The three topics pertaining to urban mobility examined in the thesis support the design and implementation of network applications for urban environments.
Understanding human mobility is extremely important for many mobile network applications. This dissertation addresses the modelling of mobile users' mobility and its application to delay-tolerant data delivery in urban environments.
We begin by modelling the mobility of mobile users, taking the mode of transport into account. Our research is based on two large GPS datasets, consisting of 10 and 20 million GPS observation points annotated with the mode of transport. Previous studies have suggested that mobility patterns have statistical properties similar to Lévy walks. Our research explains the Lévy walks'…
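A hedged sketch of the modelling idea: aggregate flight lengths are drawn from a mixture of mode-specific lognormal distributions, the decomposition the abstract describes. The mixture weights and lognormal parameters are invented for illustration; the thesis fits them from the GPS data.

```python
# Hedged sketch: aggregate "flight" lengths as a mixture of mode-specific
# lognormal distributions (Walk, Bike, Car, ...). Parameters are invented.
import random
random.seed(0)

modes = {                       # (mu, sigma) of log-length, plus mixture weight
    "walk": {"mu": 4.0, "sigma": 0.6, "weight": 0.5},   # short flights (metres)
    "bike": {"mu": 6.0, "sigma": 0.7, "weight": 0.3},
    "car":  {"mu": 8.0, "sigma": 0.9, "weight": 0.2},
}

def sample_flight():
    """Draw one flight length from the mode mixture."""
    r, acc = random.random(), 0.0
    for m in modes.values():
        acc += m["weight"]
        if r <= acc:
            return random.lognormvariate(m["mu"], m["sigma"])
    return random.lognormvariate(m["mu"], m["sigma"])    # numerical fallback

flights = sorted(sample_flight() for _ in range(10000))
for q in (0.5, 0.9, 0.99):      # pooled quantiles span orders of magnitude across modes
    print(f"{q:.0%} quantile: {flights[int(q * len(flights))]:,.0f} m")
```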
Subjects/Keywords: computer Science
APA (6th Edition):
Zhao, K. (2015). Understanding Urban Human Mobility for Network Applications. (Doctoral Dissertation). University of Helsinki. Retrieved from http://hdl.handle.net/10138/157592
Chicago Manual of Style (16th Edition):
Zhao, Kai. “Understanding Urban Human Mobility for Network Applications.” 2015. Doctoral Dissertation, University of Helsinki. Accessed February 27, 2021.
http://hdl.handle.net/10138/157592.
MLA Handbook (7th Edition):
Zhao, Kai. “Understanding Urban Human Mobility for Network Applications.” 2015. Web. 27 Feb 2021.
Vancouver:
Zhao K. Understanding Urban Human Mobility for Network Applications. [Internet] [Doctoral dissertation]. University of Helsinki; 2015. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/10138/157592.
Council of Science Editors:
Zhao K. Understanding Urban Human Mobility for Network Applications. [Doctoral Dissertation]. University of Helsinki; 2015. Available from: http://hdl.handle.net/10138/157592

Texas A&M University
15.
Kanthadai, Sundarrajan S.
Recoverable distributed shared memory.
Degree: MS, computer science, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-K35
► Distributed Shared Memory (DSM) is a model for interprocess communication, implemented on top of message passing systems. In this model, processes running on separate hosts…
(more)
▼ Distributed Shared Memory (DSM) is a model for interprocess communication, implemented on top of message passing systems. In this model, processes running on separate hosts can access a shared, coherent memory address space, provided by the underlying DSM system, through the normal read and write operations. Thus, by avoiding the programming complexities of message passing, it has become a convenient model to work with. It is a natural extension of parallel programming on uniprocessors to distributed environments. As the number of processors in the system and the running time of applications executing on such a system increase, the likelihood of processor failure due to machine malfunction, power failure, user error, etc., increases. The benefits offered by these systems can be achieved only if the whole system behaves like a failure-free system. Many algorithms that have been proposed for implementing a reliable DSM require the processes to take checkpoints whenever there is a data transfer, thus resulting in high overhead during failure-free execution. We propose a new recoverable DSM algorithm that tolerates multiple node failures and in which the checkpointing interval can be tailored to balance the cost of checkpointing against the savings in recovery obtained by taking checkpoints often. The technique uses independent checkpointing and keeps track of the dependencies by logging writes and some additional information about the occurrence of reads. Unlike previous recovery techniques, this one reduces both the message and the logging overheads.
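The checkpoint-plus-write-log idea can be sketched in a few lines: restore the last independent checkpoint and replay the logged writes. The class and method names are invented, and the real protocol also logs read-dependency information across nodes and keeps the log on stable storage.

```python
# Hedged sketch of independent checkpointing with a write log, in the spirit of
# the recoverable-DSM idea above. Data structures are invented for illustration.
import copy

class RecoverablePage:
    def __init__(self):
        self.memory = {}          # shared page contents
        self.checkpoint = {}      # last checkpointed state
        self.write_log = []       # writes since the last checkpoint

    def write(self, addr, value):
        self.write_log.append((addr, value))   # log before applying
        self.memory[addr] = value

    def take_checkpoint(self):
        self.checkpoint = copy.deepcopy(self.memory)
        self.write_log.clear()                 # log only needs post-checkpoint writes

    def recover(self):
        """Rebuild state after a crash: restore the checkpoint, then replay the log."""
        self.memory = copy.deepcopy(self.checkpoint)
        for addr, value in self.write_log:
            self.memory[addr] = value

page = RecoverablePage()
page.write("x", 1)
page.take_checkpoint()
page.write("y", 2)          # not yet checkpointed, but logged
page.recover()
print(page.memory)          # {'x': 1, 'y': 2}
```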
Subjects/Keywords: computer science.; Major computer science.
APA (6th Edition):
Kanthadai, S. S. (2012). Recoverable distributed shared memory. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-K35
Chicago Manual of Style (16th Edition):
Kanthadai, Sundarrajan S. “Recoverable distributed shared memory.” 2012. Masters Thesis, Texas A&M University. Accessed February 27, 2021.
http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-K35.
MLA Handbook (7th Edition):
Kanthadai, Sundarrajan S. “Recoverable distributed shared memory.” 2012. Web. 27 Feb 2021.
Vancouver:
Kanthadai SS. Recoverable distributed shared memory. [Internet] [Masters thesis]. Texas A&M University; 2012. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-K35.
Council of Science Editors:
Kanthadai SS. Recoverable distributed shared memory. [Masters Thesis]. Texas A&M University; 2012. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-K35

Texas A&M University
16.
Lu, Wenhu.
Design and implementation of the DDA network.
Degree: MS, computer science, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-L84
► The focus of this thesis is a network/session level software construct, called a distributed data agent (DDA) network, to support interconnection management of information systems…
(more)
▼ The focus of this thesis is a network/session-level software construct, called a distributed data agent (DDA) network, to support interconnection management of information systems needing to handle dynamic mission scenarios. Networking transparency, real-time responsiveness and fault tolerance are the primary design goals of the DDA. Based on the client-server model, DDA processes can be dynamically generated, terminated, and migrated in response to changes in mission scenarios, including system failures. The DDA is based on a multithreaded implementation, in which the service priority of each data channel can be dynamically adjusted to support soft real-time data communications in the Internet environment. The well-known rate-monotonic scheduling (RMS) scheme is adopted for service of synchronous (periodic) and aperiodic data streams. When the data needs to be distributed to numerous users, the DDA can be easily replicated to avoid performance bottlenecks. A DDA prototype has been implemented for real-time traffic data distribution, and it is successfully used for rapid prototyping of real-time transportation network optimization.
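For the periodic channels, rate-monotonic scheduling simply gives shorter-period channels higher priority; a minimal sketch of that rule, together with the classical Liu and Layland utilization bound, is shown below. The channel names, periods and costs are illustrative assumptions, not the DDA's actual configuration.

```python
# Minimal sketch of rate-monotonic priority assignment for periodic data channels:
# shorter period => higher priority. Names and numbers are illustrative.
channels = {"traffic_feed": 100, "map_tiles": 250, "status_ping": 1000}  # period in ms
exec_ms  = {"traffic_feed": 20,  "map_tiles": 50,  "status_ping": 100}   # worst-case cost

rm_order = sorted(channels, key=channels.get)     # rate-monotonic priority order
print("priority order:", rm_order)

# Liu & Layland sufficient schedulability test: U <= n * (2**(1/n) - 1)
n = len(channels)
utilization = sum(exec_ms[c] / channels[c] for c in channels)
bound = n * (2 ** (1 / n) - 1)
print(f"U = {utilization:.3f}, RM bound = {bound:.3f}, schedulable: {utilization <= bound}")
```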
Subjects/Keywords: computer science.; Major computer science.
APA (6th Edition):
Lu, W. (2012). Design and implementation of the DDA network. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-L84
Chicago Manual of Style (16th Edition):
Lu, Wenhu. “Design and implementation of the DDA network.” 2012. Masters Thesis, Texas A&M University. Accessed February 27, 2021.
http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-L84.
MLA Handbook (7th Edition):
Lu, Wenhu. “Design and implementation of the DDA network.” 2012. Web. 27 Feb 2021.
Vancouver:
Lu W. Design and implementation of the DDA network. [Internet] [Masters thesis]. Texas A&M University; 2012. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-L84.
Council of Science Editors:
Lu W. Design and implementation of the DDA network. [Masters Thesis]. Texas A&M University; 2012. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-L84

Texas A&M University
17.
Mishra, Amitabh.
Task and instruction scheduling in parallel multithreaded processors.
Degree: MS, computer science, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-M57
► Parallel muitithreading is a technique to execute parallel programs on a multithreaded superscalar processor. It enhances instruction throughput in a processor by combining program parallelism…
(more)
▼ Parallel multithreading is a technique to execute parallel programs on a multithreaded superscalar processor. It enhances instruction throughput in a processor by combining program parallelism with the strong features of superscalar and multithreaded architectures: the multiple-instruction-issue ability of a superscalar processor, and the latency-hiding feature of multithreaded architectures. In such processors, several threads issue instructions to a superscalar processor's multiple functional units every cycle. A new prioritization technique based on a critical-path analysis of program graphs is shown to enhance instruction throughput significantly by scheduling the best threads (or tasks) from a parallel program onto the processor pipelines, and by issuing the best instructions from the fetched threads to the multiple functional units of the processor. Simulation-based comparisons show that our task and instruction scheduling techniques yield an instruction throughput up to 20% better than previously proposed prioritization techniques that employ heuristics based on considerations other than a critical path analysis of the program graph. Our simulations employ parallel applications chosen carefully to reflect diverse program behavior. Our results suggest the use of superscalar multithreaded processors to perform efficient parallel processing.
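A hedged sketch of the underlying idea: rank the tasks of a program graph by the length of the longest remaining (critical) path, and let the scheduler prefer the most critical ones. The DAG and the costs are invented; the thesis's prioritization heuristic has further details.

```python
# Hedged sketch: rank tasks of a program DAG by critical-path length (longest
# remaining path), so the most critical threads/tasks are scheduled first.
from functools import lru_cache

cost = {"A": 3, "B": 2, "C": 4, "D": 1, "E": 2}          # per-task execution cost
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}

@lru_cache(maxsize=None)
def critical(task):
    """Length of the longest path from `task` to any sink, including its own cost."""
    return cost[task] + max((critical(s) for s in succ[task]), default=0)

priority_order = sorted(cost, key=critical, reverse=True)
print([(t, critical(t)) for t in priority_order])
# [('A', 10), ('C', 7), ('B', 5), ('D', 3), ('E', 2)]
```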
Subjects/Keywords: computer science.; Major computer science.
APA (6th Edition):
Mishra, A. (2012). Task and instruction scheduling in parallel multithreaded processors. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-M57
Chicago Manual of Style (16th Edition):
Mishra, Amitabh. “Task and instruction scheduling in parallel multithreaded processors.” 2012. Masters Thesis, Texas A&M University. Accessed February 27, 2021.
http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-M57.
MLA Handbook (7th Edition):
Mishra, Amitabh. “Task and instruction scheduling in parallel multithreaded processors.” 2012. Web. 27 Feb 2021.
Vancouver:
Mishra A. Task and instruction scheduling in parallel multithreaded processors. [Internet] [Masters thesis]. Texas A&M University; 2012. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-M57.
Council of Science Editors:
Mishra A. Task and instruction scheduling in parallel multithreaded processors. [Masters Thesis]. Texas A&M University; 2012. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-M57

Texas A&M University
18.
Patterson, Justin William.
A relevance of documentation metric.
Degree: MS, computer science, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-P384
► For years, researchers have been striving to find ways of accurately determining the maintainability of software systems. The topic has been approached in numerous ways,…
(more)
▼ For years, researchers have been striving to find ways of accurately determining the maintainability of software systems. The topic has been approached in numerous ways, most often based upon determining the complexity of the system. The maintainability of a system is not a product of the program's complexity alone but is greatly affected by the ease of comprehension of the system. This is, in turn, heavily influenced by the presentation of the system and the quality of the documentation that is provided. This research is based upon a new approach to the problem of determining maintainability: that of examining the contents of the documentation and the source code with respect to each other. An initial relevance of documentation metric was implemented using pattern-matching techniques. The target systems for this metric were programs written using Knuth's literate programming paradigm, in which documentation and source code are tightly coupled. Problems and issues arising from this initial implementation suggest areas for future research, including the use of natural language understanding techniques combined with an analysis of the semantic structure of the code.
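A very crude sketch of a relevance-of-documentation score: measure the overlap between words in a documentation block and identifiers in the code it describes using simple pattern matching. The snippet, the token filtering and the score are illustrative assumptions, far simpler than the metric developed in the thesis.

```python
# Hedged sketch of a "relevance of documentation" score: overlap between words in a
# doc block and identifiers in the code it describes. Crude pattern matching only.
import re

doc = "Computes the running mean of samples and resets the counter when asked."
code = """
def running_mean(samples):
    counter = len(samples)
    return sum(samples) / counter
"""

doc_tokens  = set(re.findall(r"[a-z]+", doc.lower()))
code_tokens = set(re.findall(r"[a-z_]+", code.lower())) - {"def", "return", "len", "sum"}

overlap = doc_tokens & code_tokens
relevance = len(overlap) / len(code_tokens) if code_tokens else 0.0
print(overlap, f"relevance = {relevance:.2f}")   # {'samples', 'counter'} relevance = 0.67
```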
Subjects/Keywords: computer science.; Major computer science.
APA (6th Edition):
Patterson, J. W. (2012). A relevance of documentation metric. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-P384
Chicago Manual of Style (16th Edition):
Patterson, Justin William. “A relevance of documentation metric.” 2012. Masters Thesis, Texas A&M University. Accessed February 27, 2021.
http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-P384.
MLA Handbook (7th Edition):
Patterson, Justin William. “A relevance of documentation metric.” 2012. Web. 27 Feb 2021.
Vancouver:
Patterson JW. A relevance of documentation metric. [Internet] [Masters thesis]. Texas A&M University; 2012. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-P384.
Council of Science Editors:
Patterson JW. A relevance of documentation metric. [Masters Thesis]. Texas A&M University; 2012. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-P384

Texas A&M University
19.
Paul, Debjyoti.
Logic verification using recursive learning, ATPG and transformations.
Degree: MS, computer science, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-P39
► We describe a new approach for formal verification of combinational logic circuits using Recursive learning, ATPG and Transformations. A logic verification tool, VeriLAT, was implemented…
(more)
▼ We describe a new approach for formal verification of combinational logic circuits using Recursive learning, ATPG and Transformations. A logic verification tool, VeriLAT, was implemented based on this approach. Experimental results on the ISCAS85 benchmark circuits have been included. The approach is shown to be fast and robust for a wide variety of circuits. It has been shown to be especially suited for verifying dissimilar circuits, which cause a lot of problems for other approaches.
Subjects/Keywords: computer science.; Major computer science.
APA (6th Edition):
Paul, D. (2012). Logic verification using recursive learning, ATPG and transformations. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-P39
Chicago Manual of Style (16th Edition):
Paul, Debjyoti. “Logic verification using recursive learning, ATPG and transformations.” 2012. Masters Thesis, Texas A&M University. Accessed February 27, 2021.
http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-P39.
MLA Handbook (7th Edition):
Paul, Debjyoti. “Logic verification using recursive learning, ATPG and transformations.” 2012. Web. 27 Feb 2021.
Vancouver:
Paul D. Logic verification using recursive learning, ATPG and transformations. [Internet] [Masters thesis]. Texas A&M University; 2012. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-P39.
Council of Science Editors:
Paul D. Logic verification using recursive learning, ATPG and transformations. [Masters Thesis]. Texas A&M University; 2012. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-P39

Texas A&M University
20.
Shah, Kuntal.
Evaluation of non-cubic processor allocation in hybercubes.
Degree: MS, computer science, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-S52
► The main objective of processor allocation and task mapping in Hypercube multicomputers is to maximize utilization by reducing external and internal fragmentation and to reduce…
(more)
▼ The main objective of processor allocation and task mapping in Hypercube multicomputers is to maximize utilization by reducing external and internal fragmentation and to reduce the communication time for messages being exchanged between the processors. Whereas cubic and contiguous allocation of processors in Hypercubes has been well studied, non-cubic and non-contiguous allocations have been largely neglected because of potential message contention and a larger number of hops between source and destination nodes of a message. With current communication techniques, like wormhole routing, delay due to the number of hops between processors is known to be negligible. The goal of this thesis is to evaluate non-contiguous cubic and random allocation and task mapping in Hypercube computers. The degradation in performance due to message contention and a larger average number of hops for messages because of non-contiguity is studied. Emulation experiments are conducted with a synthetic application as well as some actual applications loaded in non-contiguous cubic and random fashion on a 32-node subcube of an Ncube system. The communication time for these applications is measured under varying amounts of system load and with varying message sizes. The effects of link contention due to intertask and intratask message interference on the communication time of these applications are observed. It is shown that the increase in communication time of message-passing applications on the Ncube due to non-contiguous cubic and random allocation is small compared to the drastic improvement expected in system utilization and simplified task mapping in Hypercube systems. Hence, non-contiguous cubic and random processor allocation and topology-independent task assignment can be used in Hypercubes.
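In a hypercube, node addresses are bit strings and the hop count between two nodes is their Hamming distance, so the effect of non-contiguous allocation on average hops can be sketched directly; the allocation sizes below are illustrative assumptions, not the experiments run in the thesis.

```python
# Hedged sketch: average inter-node hop count (Hamming distance) of a contiguous
# subcube allocation versus a random allocation in a 32-node hypercube.
import random
from itertools import combinations
random.seed(1)

DIM = 5                                   # 2**5 = 32-node hypercube

def avg_hops(nodes):
    pairs = list(combinations(nodes, 2))
    return sum(bin(a ^ b).count("1") for a, b in pairs) / len(pairs)

contiguous = list(range(8))                       # the 8-node subcube 00xxx (upper two bits fixed)
random_alloc = random.sample(range(2 ** DIM), 8)  # 8 nodes scattered anywhere

print("contiguous subcube:", avg_hops(contiguous))
print("random allocation: ", avg_hops(random_alloc))
```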
Subjects/Keywords: computer science.; Major computer science.
APA (6th Edition):
Shah, K. (2012). Evaluation of non-cubic processor allocation in hybercubes. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-S52
Chicago Manual of Style (16th Edition):
Shah, Kuntal. “Evaluation of non-cubic processor allocation in hybercubes.” 2012. Masters Thesis, Texas A&M University. Accessed February 27, 2021.
http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-S52.
MLA Handbook (7th Edition):
Shah, Kuntal. “Evaluation of non-cubic processor allocation in hybercubes.” 2012. Web. 27 Feb 2021.
Vancouver:
Shah K. Evaluation of non-cubic processor allocation in hybercubes. [Internet] [Masters thesis]. Texas A&M University; 2012. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-S52.
Council of Science Editors:
Shah K. Evaluation of non-cubic processor allocation in hybercubes. [Masters Thesis]. Texas A&M University; 2012. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-S52

Texas A&M University
21.
Shao, Li.
Logic and transistor circuit verification using regression testing and hierarchical recursive learning.
Degree: MS, computer science, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-S53
► We describe a new approach for formal verification of combinational logic circuits, and their switch-level transistor implementation. Our approach CODibines regression testing, hierarchical recursive learning,…
(more)
▼ We describe a new approach for formal verification of combinational logic circuits, and their switch-level transistor implementation. Our approach combines regression testing, hierarchical recursive learning, and test generation techniques. A prototype of a verification tool, Verifast, was developed based on this approach. We demonstrate Verifast on optimized and technology-mapped versions of the ISCAS85 benchmarks. Unlike most other approaches, Verifast can handle all of those circuits, and is almost always faster.
Subjects/Keywords: computer science; Major computer science
APA (6th Edition):
Shao, L. (2012). Logic and transistor circuit verification using regression testing and hierarchical recursive learning. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-S53
Chicago Manual of Style (16th Edition):
Shao, Li. “Logic and transistor circuit verification using regression testing and hierarchical recursive learning.” 2012. Masters Thesis, Texas A&M University. Accessed February 27, 2021.
http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-S53.
MLA Handbook (7th Edition):
Shao, Li. “Logic and transistor circuit verification using regression testing and hierarchical recursive learning.” 2012. Web. 27 Feb 2021.
Vancouver:
Shao L. Logic and transistor circuit verification using regression testing and hierarchical recursive learning. [Internet] [Masters thesis]. Texas A&M University; 2012. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-S53.
Council of Science Editors:
Shao L. Logic and transistor circuit verification using regression testing and hierarchical recursive learning. [Masters Thesis]. Texas A&M University; 2012. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-S53

Texas A&M University
22.
Son, Wookho.
Dexterous manipulation planning for a planar whole-arm manipulation system.
Degree: MS, computer science, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-S66
► Dexterous manipulation planning can be defined as the process of determining the control trajectories both in terms of joint displacements and efforts for an articulated…
(more)
▼ Dexterous manipulation planning can be defined as the process of determining the control trajectories, both in terms of joint displacements and efforts, for an articulated mechanical hand, in such a way that, if executed, the hand can reconfigure the grasp into one which is known to be more desirable than the initial one. In this thesis, we implement a dexterous manipulation planning system for a generalized manipulation system in the planar case under a quasi-static mechanical model. The main contribution of this thesis is the application to dexterous manipulation planning of classical cell-decomposition methods, coupled with joint-control mode partitioning to perform the quasi-static motion prediction at each step of movement for both sliding and rolling contact, with or without friction. In our cell-decomposition method applied to dexterous planning, the system's C-space is decomposed into stability cells by using both forward and back projection methods in order to construct an interconnection of stability cells of various dimensions, from which a successful plan can be extracted by navigating through them using stability criteria. Results of the dexterous manipulation planning are given for a two-finger manipulation system as a case study, for both frictionless and frictional cases, by considering sliding and rolling contact as the possible contact modes.
Subjects/Keywords: computer science; Major computer science
APA (6th Edition):
Son, W. (2012). Dexterous manipulation planning for a planar whole-arm manipulation system. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-S66
Chicago Manual of Style (16th Edition):
Son, Wookho. “Dexterous manipulation planning for a planar whole-arm manipulation system.” 2012. Masters Thesis, Texas A&M University. Accessed February 27, 2021.
http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-S66.
MLA Handbook (7th Edition):
Son, Wookho. “Dexterous manipulation planning for a planar whole-arm manipulation system.” 2012. Web. 27 Feb 2021.
Vancouver:
Son W. Dexterous manipulation planning for a planar whole-arm manipulation system. [Internet] [Masters thesis]. Texas A&M University; 2012. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-S66.
Council of Science Editors:
Son W. Dexterous manipulation planning for a planar whole-arm manipulation system. [Masters Thesis]. Texas A&M University; 2012. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-S66

Texas A&M University
23.
Wheatley, Philip Stephen.
Expansion of the internet protocol address space with "minor" disruption of current hardware or software.
Degree: MS, computer science, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-W53
► Currently, the Internet suite of protocols uses a 32 bit network layer address and requires that each machine have a unique address. The problem: 32…
(more)
▼ Currently, the Internet suite of protocols uses a 32-bit network-layer address and requires that each machine have a unique address. The problem: 32 bits only distinguishes 2^32, or 4,294,967,296, machines. Even with four billion addresses, experts predict running out of addresses within a few years. There are several ways to solve this problem. The two most obvious ones are either to split the current Internet, or switch to a different addressing method. Splitting the Internet means the parts cannot directly talk to each other. Another addressing method means rewriting current software on 4,294,967,296 machines, some of whose manufacturers are now out of business or unable/unwilling to rewrite, especially for free, their networking software. It is therefore essential to have an intermediary protocol that works without modifying current machines, and allows an arbitrary address length. This protocol would allow the current Internet machines to talk to machines using another, longer, method of addressing. This paper describes a protocol that increases the number of addresses without disrupting the Internet's current addressing system.
Subjects/Keywords: computer science.; Major computer science.
APA (6th Edition):
Wheatley, P. S. (2012). Expansion of the internet protocol address space with "minor" disruption of current hardware or software. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-W53
Chicago Manual of Style (16th Edition):
Wheatley, Philip Stephen. “Expansion of the internet protocol address space with "minor" disruption of current hardware or software.” 2012. Masters Thesis, Texas A&M University. Accessed February 27, 2021.
http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-W53.
MLA Handbook (7th Edition):
Wheatley, Philip Stephen. “Expansion of the internet protocol address space with "minor" disruption of current hardware or software.” 2012. Web. 27 Feb 2021.
Vancouver:
Wheatley PS. Expansion of the internet protocol address space with "minor" disruption of current hardware or software. [Internet] [Masters thesis]. Texas A&M University; 2012. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-W53.
Council of Science Editors:
Wheatley PS. Expansion of the internet protocol address space with "minor" disruption of current hardware or software. [Masters Thesis]. Texas A&M University; 2012. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1996-THESIS-W53

Texas A&M University
24.
Coquelin, Valerie.
Digital redaction.
Degree: MS, computer science, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-1997-THESIS-C675
► Archivists are faced with the challenge of ensuring access, preservation, security and storage of a rapidly growing quantity of records. The life span of records…
(more)
▼ Archivists are faced with the challenge of ensuring access, preservation, security and storage of a rapidly growing quantity of records. The life span of records may be reduced by continuous and frequent manipulation, yet these same records might be of high interest to many scholars or journalists. These documents may also contain confidential information. One solution to these problems is to create digital proxies of these original documents. Opportunities for manipulation and access of digital documents are increasing at a steady pace. Based on an extensive survey of this field and with the specific needs of archivists in mind, this thesis presents an architecture for a digital system for archiving digital reproductions. The issues of storage, retrieval and manipulation in a collaborative fashion in a networked environment are presented. Digital redaction, which constitutes the main theme of the thesis, is further elaborated. Digital redaction methods and commonly used techniques employed in the editing of documents are presented. Digital redaction tools that allow users to edit textual image documents have great potential in the context of digital archives. This thesis describes CIRT, a Collaborative Image Redacting Tool designed to provide specific manipulation and editing of electronic documents by archivists. The prototypical implementation of this tool is used as a method of investigation of research issues in digital redaction of archival material by archivists. CIRT is composed of an individual editing tool, synchronous and asynchronous collaborative editing tools, a document viewer and a document access manager.
Subjects/Keywords: computer science.; Major computer science.
APA (6th Edition):
Coquelin, V. (2012). Digital redaction. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-1997-THESIS-C675
Chicago Manual of Style (16th Edition):
Coquelin, Valerie. “Digital redaction.” 2012. Masters Thesis, Texas A&M University. Accessed February 27, 2021.
http://hdl.handle.net/1969.1/ETD-TAMU-1997-THESIS-C675.
MLA Handbook (7th Edition):
Coquelin, Valerie. “Digital redaction.” 2012. Web. 27 Feb 2021.
Vancouver:
Coquelin V. Digital redaction. [Internet] [Masters thesis]. Texas A&M University; 2012. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1997-THESIS-C675.
Council of Science Editors:
Coquelin V. Digital redaction. [Masters Thesis]. Texas A&M University; 2012. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1997-THESIS-C675

Texas A&M University
25.
Hameed, Sohail.
Scheduling information broadcast in asymmetric environment.
Degree: MS, computer science, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-1997-THESIS-H36
► With the increasing popularity of portable wireless computers, mechanisms to efficiently transmit information to such clients are of significant interest. The environment under consideration is…
(more)
▼ With the increasing popularity of portable wireless computers, mechanisms to efficiently transmit information to such clients are of significant interest. The environment under consideration is asymmetric in that the information server has relatively larger bandwidth to communicate to clients than the clients have to communicate to the server. Many applications suit such an environment; some do not even require the clients to send any data (requests) to the server. Experience has shown that in such environments the server should broadcast the information periodically. Many researchers have shown interest in this area, and their work covers a broad horizon. The key to efficient broadcasting lies very much in the way the schedule of broadcast information is prepared by the server. Many researchers have proposed different schemes, but few address the optimality of information scheduling. Also, most of them assume the medium to be perfect and do not take transmission errors into account. Besides, an interesting variation of the problem would be to broadcast over multiple channels, which has not been given much attention. This thesis analyzes solutions to these problems. It also gives a couple of algorithms and evaluates their performance using simulation. It also compares the simulation results with the lower bounds obtained by analysis of these problems.
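One standard way to evaluate a periodic broadcast schedule is the expected client wait time, which for a cyclic schedule is sum(gap_i^2)/(2L) per item; the sketch below compares a flat schedule with one that repeats the popular item more often. The schedules and demand probabilities are invented assumptions, and this is not the thesis's own scheduling algorithm.

```python
# Hedged sketch of evaluating a periodic broadcast schedule: the expected wait for
# an item is sum(gap_i^2) / (2 * cycle_length) over the gaps between its broadcasts.

def expected_wait(schedule, item):
    positions = [i for i, x in enumerate(schedule) if x == item]
    L = len(schedule)
    gaps = [(positions[(k + 1) % len(positions)] - p) % L or L
            for k, p in enumerate(positions)]
    return sum(g * g for g in gaps) / (2 * L)

demand = {"A": 0.6, "B": 0.3, "C": 0.1}          # how often clients request each item

flat   = ["A", "B", "C"]                          # every item once per cycle
skewed = ["A", "B", "A", "C", "A", "B"]           # popular item broadcast more often

for name, sched in (("flat", flat), ("skewed", skewed)):
    mean = sum(p * expected_wait(sched, item) for item, p in demand.items())
    print(name, round(mean, 3))                   # skewed schedule lowers the mean wait
```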
Subjects/Keywords: computer science.; Major computer science.
APA (6th Edition):
Hameed, S. (2012). Scheduling information broadcast in asymmetric environment. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-1997-THESIS-H36
Chicago Manual of Style (16th Edition):
Hameed, Sohail. “Scheduling information broadcast in asymmetric environment.” 2012. Masters Thesis, Texas A&M University. Accessed February 27, 2021.
http://hdl.handle.net/1969.1/ETD-TAMU-1997-THESIS-H36.
MLA Handbook (7th Edition):
Hameed, Sohail. “Scheduling information broadcast in asymmetric environment.” 2012. Web. 27 Feb 2021.
Vancouver:
Hameed S. Scheduling information broadcast in asymmetric environment. [Internet] [Masters thesis]. Texas A&M University; 2012. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1997-THESIS-H36.
Council of Science Editors:
Hameed S. Scheduling information broadcast in asymmetric environment. [Masters Thesis]. Texas A&M University; 2012. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1997-THESIS-H36

Texas A&M University
26.
Dillon, Geoffrey A.
Dynamic, transparent Internet server replication using HYDRANET.
Degree: MS, computer science, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-1998-THESIS-D55
► The exponential growth of the use of the Internet has hics. caused increasing stress on the networking infrastructure. Routers, servers and protocols are reaching their…
(more)
▼ The exponential growth of the use of the Internet has caused increasing stress on the networking infrastructure. Routers, servers and protocols are reaching their limits and need room to scale up their capacities to meet user demands. Client-transparent techniques are needed to make servers scalable on demand. We have developed an enhancement to the TCP/IP infrastructure, called HYDRANET, which will enable servers to dynamically instantiate active agents at selected hosts (Replication Servers) in the Internet. These agents will replicate the transport-level service access points of their origin servers and begin serving clients in the name of the origin server. A new protocol and application API were also developed to manage this service replication scheme by allowing servers to connect to a Replication Server and instantiate a replica. We have applied this new method to a simple HTTP server to demonstrate push-based Web caching. Performance measurements have shown that the scheme does not incur severe penalties to access latency when deployed and actually improves local performance.
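A hedged, very high-level sketch of the replication idea: an origin server instantiates replicas of its service access point at chosen hosts, and clients are directed to the nearest one. All names are invented, and the real HYDRANET mechanism works inside the TCP/IP stack rather than at this toy application level.

```python
# Hedged toy sketch of service replication: register replicas of a service access
# point and resolve each lookup to the closest replica. Names are invented.
registry = {}    # service name -> list of replica descriptors

def instantiate_replica(service, origin_host, replication_host, distance):
    registry.setdefault(service, []).append({"origin": origin_host,
                                             "host": replication_host,
                                             "distance": distance})

def resolve(service):
    """Pick the closest host currently serving in the origin server's name."""
    replicas = registry.get(service, [])
    return min(replicas, key=lambda r: r["distance"])["host"] if replicas else None

instantiate_replica("web", "origin.example", "origin.example", 12)
instantiate_replica("web", "origin.example", "replica-1.example", 3)
print(resolve("web"))   # -> "replica-1.example"
```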
Subjects/Keywords: computer science.; Major computer science.
APA (6th Edition):
Dillon, G. A. (2012). Dynamic, transparent Internet server replication using HYDRANET. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-1998-THESIS-D55
Chicago Manual of Style (16th Edition):
Dillon, Geoffrey A. “Dynamic, transparent Internet server replication using HYDRANET.” 2012. Masters Thesis, Texas A&M University. Accessed February 27, 2021.
http://hdl.handle.net/1969.1/ETD-TAMU-1998-THESIS-D55.
MLA Handbook (7th Edition):
Dillon, Geoffrey A. “Dynamic, transparent Internet server replication using HYDRANET.” 2012. Web. 27 Feb 2021.
Vancouver:
Dillon GA. Dynamic, transparent Internet server replication using HYDRANET. [Internet] [Masters thesis]. Texas A&M University; 2012. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1998-THESIS-D55.
Council of Science Editors:
Dillon GA. Dynamic, transparent Internet server replication using HYDRANET. [Masters Thesis]. Texas A&M University; 2012. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1998-THESIS-D55

Texas A&M University
27.
Francisco-Revilla, Luis.
User & situation models for medical information delivery.
Degree: MS, computer science, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-1998-THESIS-F73
► The human exploration of Mars today represents a ographics. challenging and interesting venture for humanity, right at the cutting edge of attainability, both technologically and…
(more)
▼ The human exploration of Mars today represents a challenging and interesting venture for humanity, right at the cutting edge of attainability, both technologically and socially. It is unique in the sense that for the first time in history, humans are leaving the Earth with a very limited return capability. Also, communication with Earth is limited, with a small communication window and significant time lag. Due to these and other intrinsic characteristics of the enterprise, the crew must be able to resolve any possible incident with little or no help from Earth. In a medical emergency, it is impossible for an Earth-based medical team to provide appropriate remote support. Therefore any support requiring real-time use of a knowledge base must be provided locally. The present work explores the design of a Medical Assistant System for such conditions. It investigates the issues regarding the kind of interaction, the system's degree of intrusion, the system's context sensitivity, and the support level that the medical assistant must provide. The purpose of this thesis is to identify design tradeoffs based on scenarios of use. The design of the system contemplates the situated action approach, the cooperative problem solving approach, the utilization of user and situation models, the separation of the presentation from the information itself, and the system's role as an assistant. The system architecture is based on a client-server model using Object Oriented Programming. The system is divided into modules providing a very flexible environment for interoperability and interchange of software components, whether working in a distributed or local environment. Scenarios of use, consisting of walkthroughs through the supported tasks, are utilized to present the dynamics of the system, interaction between user and system, and variation of behavior according to the situation and user characteristics. This prototype has helped to explore the design space, allowing the identification of system limitations, tradeoffs, new requirements, technological issues, implications for other areas, and future research implementation.
Subjects/Keywords: computer science.; Major computer science.
APA (6th Edition):
Francisco-Revilla, L. (2012). User & situation models for medical information delivery. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-1998-THESIS-F73
Chicago Manual of Style (16th Edition):
Francisco-Revilla, Luis. “User & situation models for medical information delivery.” 2012. Masters Thesis, Texas A&M University. Accessed February 27, 2021.
http://hdl.handle.net/1969.1/ETD-TAMU-1998-THESIS-F73.
MLA Handbook (7th Edition):
Francisco-Revilla, Luis. “User & situation models for medical information delivery.” 2012. Web. 27 Feb 2021.
Vancouver:
Francisco-Revilla L. User & situation models for medical information delivery. [Internet] [Masters thesis]. Texas A&M University; 2012. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1998-THESIS-F73.
Council of Science Editors:
Francisco-Revilla L. User & situation models for medical information delivery. [Masters Thesis]. Texas A&M University; 2012. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1998-THESIS-F73

Texas A&M University
28.
Gopalakrishnan, Dhilip.
Geographic multicasting in wireless ad-hoc networks.
Degree: MS, computer science, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-1998-THESIS-G668
► This thesis proposes a new protocol for providing a location based massaging framework in wireless ad-hoc networks. In such a scheme, the destination for a…
(more)
▼ This thesis proposes a new protocol for providing a location-based messaging framework in wireless ad-hoc networks. In such a scheme, the destination for a message would be the set of all hosts which lie in a particular geographic domain defined by a set of geographic co-ordinates. Some of the scenarios where the scheme could be used include sending flood warning messages to people along the banks of a river, communicating with a group near a landmine in a search operation, and so on, where the objective is to send the message to all hosts in the geographic domain irrespective of who or how many of them exist. The proposed protocol comprises two basic components: a routing mechanism to provide routes from the designated sender of messages to all other hosts in the ad-hoc network, and a multicast protocol which uses this basic routing capability to efficiently send messages to all the hosts in the geographic domain. The thesis studies the results obtained from simulation runs of the protocol with respect to accuracy and efficiency. The accuracy metric gives an indication of the number of hosts that receive the message as opposed to the number of hosts that ideally should have. The efficiency metric gives an indication of the average number of messages received by each host for every multicast message sent. The protocol discussed in the thesis assumes that all the hosts in the network have complete knowledge of the existence of all the registered geographic multicast domains. The results are compared with another protocol where the hosts do not have that information.
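At the heart of geographic multicast is a membership test: a message carries the coordinates of its target region and each host checks whether its own position lies inside. The sketch below uses a rectangular bounding box and invented host positions; the thesis's protocol supports more general domains and also handles the routing itself.

```python
# Hedged sketch of the membership test behind geographic multicast: a message carries
# the bounding box of its target region, and each host checks its own GPS position.

def in_domain(position, domain):
    lat, lon = position
    return (domain["lat_min"] <= lat <= domain["lat_max"]
            and domain["lon_min"] <= lon <= domain["lon_max"])

flood_warning_domain = {"lat_min": 30.55, "lat_max": 30.70,   # a stretch of river bank
                        "lon_min": -96.40, "lon_max": -96.25}

hosts = {"host_a": (30.62, -96.33), "host_b": (30.80, -96.33), "host_c": (30.58, -96.30)}

recipients = [h for h, pos in hosts.items() if in_domain(pos, flood_warning_domain)]
print(recipients)    # ['host_a', 'host_c'] -- only hosts inside the region deliver the message
```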
Subjects/Keywords: computer science.; Major computer science.
APA (6th Edition):
Gopalakrishnan, D. (2012). Geographic multicasting in wireless ad-hoc networks. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-1998-THESIS-G668
Chicago Manual of Style (16th Edition):
Gopalakrishnan, Dhilip. “Geographic multicasting in wireless ad-hoc networks.” 2012. Masters Thesis, Texas A&M University. Accessed February 27, 2021.
http://hdl.handle.net/1969.1/ETD-TAMU-1998-THESIS-G668.
MLA Handbook (7th Edition):
Gopalakrishnan, Dhilip. “Geographic multicasting in wireless ad-hoc networks.” 2012. Web. 27 Feb 2021.
Vancouver:
Gopalakrishnan D. Geographic multicasting in wireless ad-hoc networks. [Internet] [Masters thesis]. Texas A&M University; 2012. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1998-THESIS-G668.
Council of Science Editors:
Gopalakrishnan D. Geographic multicasting in wireless ad-hoc networks. [Masters Thesis]. Texas A&M University; 2012. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1998-THESIS-G668

Texas A&M University
29.
Green, Jeremy Donald.
Enhancement of the Texas A&M University Autonomous Underwater Vehicle Controller through development of a middle level classical path planner.
Degree: MS, computer science, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-1998-THESIS-G74
► The Texas A&M University Autonomous Underwater Vehicle Controller (AUVC) is a combination of software and hardware that provides real-time, fault-tolerant control of an unmanned, untethered…
(more)
▼ The Texas A&M University Autonomous Underwater Vehicle Controller (AUVC) is a combination of software and hardware that provides real-time, fault-tolerant control of an unmanned, untethered submersible. The collision avoidance controller (CAC) is the reactive path planning component of the AUVC which performs real-time obstacle detection, tracking and avoidance from raw sonar data. This module relies on a merit function based on the concept of potential fields for path planning. The problem of false minima is addressed through the use of a visit count which effectively builds up the potential basin of a false minimum into a repulsive mound. The product of this research is a higher level classical path planner designed to assist the merit function in complicated environments which require backtracking or directed search for a valid path. The specific safe travel requirements of the prototype AUV allow reduction of the path planning problem for the AUV to that of path planning for a point robot. An octree model of the environment is constructed from the sonar data processed by the CAC and an A* search of this octree produces a list of subgoals used to modify the avoidance behavior of the merit function. This higher level path planner is resolution complete in static environments. This fact and the current near real-time, correct performance of the octree modeler and planner affirm the validity of this approach.
Subjects/Keywords: computer science.; Major computer science.
APA (6th Edition):
Green, J. D. (2012). Enhancement of the Texas A&M University Autonomous Underwater Vehicle Controller through development of a middle level classical path planner. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-1998-THESIS-G74
Chicago Manual of Style (16th Edition):
Green, Jeremy Donald. “Enhancement of the Texas A&M University Autonomous Underwater Vehicle Controller through development of a middle level classical path planner.” 2012. Masters Thesis, Texas A&M University. Accessed February 27, 2021.
http://hdl.handle.net/1969.1/ETD-TAMU-1998-THESIS-G74.
MLA Handbook (7th Edition):
Green, Jeremy Donald. “Enhancement of the Texas A&M University Autonomous Underwater Vehicle Controller through development of a middle level classical path planner.” 2012. Web. 27 Feb 2021.
Vancouver:
Green JD. Enhancement of the Texas A&M University Autonomous Underwater Vehicle Controller through development of a middle level classical path planner. [Internet] [Masters thesis]. Texas A&M University; 2012. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1998-THESIS-G74.
Council of Science Editors:
Green JD. Enhancement of the Texas A&M University Autonomous Underwater Vehicle Controller through development of a middle level classical path planner. [Masters Thesis]. Texas A&M University; 2012. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-1998-THESIS-G74

Texas A&M University
30.
Collins, Tamara Lyn.
An efficient public key infrastructure revocation mechanism.
Degree: MS, computer science, 2012, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-2000-THESIS-C638
► Public key infrastructures are getting more attention these days. They provide a method from which users can obtain public key information for other users. While,…
(more)
▼ Public key infrastructures are getting more attention these days. They provide a method by which users can obtain public key information for other users. While, in theory, this may seem somewhat trivial, there are many details that must be taken into consideration, such as certificate revocation mechanisms. Since certificates will not be valid forever, there must be a way in which users can obtain accurate information. The research in this thesis provides such a mechanism. This research uses a verification and retrieval process to allow users to access the necessary information. This method also reduces the load placed upon a Certificate Authority by making the user perform more operations.
Subjects/Keywords: computer science.; Major computer science.
APA (6th Edition):
Collins, T. L. (2012). An efficient public key infrastructure revocation mechanism. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-2000-THESIS-C638
Chicago Manual of Style (16th Edition):
Collins, Tamara Lyn. “An efficient public key infrastructure revocation mechanism.” 2012. Masters Thesis, Texas A&M University. Accessed February 27, 2021.
http://hdl.handle.net/1969.1/ETD-TAMU-2000-THESIS-C638.
MLA Handbook (7th Edition):
Collins, Tamara Lyn. “An efficient public key infrastructure revocation mechanism.” 2012. Web. 27 Feb 2021.
Vancouver:
Collins TL. An efficient public key infrastructure revocation mechanism. [Internet] [Masters thesis]. Texas A&M University; 2012. [cited 2021 Feb 27].
Available from: http://hdl.handle.net/1969.1/ETD-TAMU-2000-THESIS-C638.
Council of Science Editors:
Collins TL. An efficient public key infrastructure revocation mechanism. [Masters Thesis]. Texas A&M University; 2012. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-2000-THESIS-C638
◁ [1] [2] [3] [4] [5] … [1363] ▶