You searched for +publisher:"Victoria University of Wellington" +contributor:("Welch, Ian")
Showing records 1 – 22 of 22 total matches.
No search limiters apply to these results.

Victoria University of Wellington
1.
Emmanuel, Michael.
Interconnection Impact Analysis of Solar Photovoltaic Systems with Distribution Networks.
Degree: 2018, Victoria University of Wellington
URL: http://hdl.handle.net/10063/7052
As solar PV technology continues to evolve into the most common form of distributed generation (DG), coupled with increasing interconnection requests, accurate modelling of the potential operational impacts of this game-changer is pivotal in order to maintain the reliability of the electric grid. The overall goal of this research is to conduct an interconnection impact analysis of solar PV systems at increasing penetration levels, subject to the feeder constraints within the distribution network. This is carried out with a time series power flow analysis method to capture the time-varying nature of solar PV and load and their interactions with distribution network device operations. This thesis also analyses multiple PV system scenarios and a wide range of possible impacts to enable distribution system planners and operators to understand and characterize grid operations with the integration of PV systems.
An evaluation of the operational and reliability performance of a grid-connected PV system based on IEC standards and industry guides is performed to detect design failures and avoid unnecessary delays to PV penetration. The performance analysis metrics in this research allow cross-comparison between PV systems operating under different climatic conditions. This thesis shows the significant impact of temperature on the overall performance of the PV system. This research conducts an interconnection study for spatially distributed single-phase grid-tied PV systems with five-minute-resolution load and solar irradiance data on a typical distribution feeder. This research also compares the performance of generator models, PQ and P|V|, for connecting PV-DG with the distribution feeder, along with their respective computational costs for a converged power flow solution.
Furthermore, a method capable of computing incremental capacity additions, measuring risks, and quantifying the upgrade deferral provided by PV system deployments is investigated in this research. This thesis proposes surrogate metrics, energy exceeding normal rating and unserved energy, for evaluating system reliability and capacity usage, which can be a very useful visualization tool for utilities. Sensitivity analysis is also performed for optimal location of the PV system on the distribution network. This is important because optimal integration of PV systems is often near-optimal for network capacity relief issues as well.
This thesis models the impact of centralized PV variability on the electric grid using the wavelet variability model (WVM) which considers the key factors that affect PV variability such as PV footprint, density and cloud movement over the entire PV plant. The upscaling advantage from a single module and point irradiance sensor to geographic smoothing over the entire PV footprint in WVM is used to simulate effects of a utility-interactive PV system on the distribution feeder.
Further, the PV interconnection scenarios presented in this thesis have been modelled with different time scales ranging from seconds to hours in order to accurately capture and…
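The time-series idea in this abstract can be illustrated with a toy net-load calculation (all numbers are invented, not data from the thesis): at each interval, feeder net load is demand minus aggregate PV output, and intervals where PV exceeds demand indicate reverse power flow back toward the substation, one of the interconnection impacts a full power-flow study would quantify.

```python
# Toy sketch of a time-series PV interconnection check (hypothetical data).
# A real study runs a full power-flow solver per interval; here we only
# compute feeder net load and flag reverse-power-flow intervals.

load_kw = [320, 300, 280, 350, 400, 380]   # feeder demand per 5-min interval
pv_kw   = [0,   150, 290, 360, 310, 100]   # aggregate PV output per interval

net_load = [l - p for l, p in zip(load_kw, pv_kw)]
reverse_flow = [i for i, n in enumerate(net_load) if n < 0]

print(net_load)       # [320, 150, -10, -10, 90, 280]
print(reverse_flow)   # intervals 2 and 3 export power to the grid
```

Higher-resolution data (the thesis mentions time scales from seconds to hours) would catch short ramps that hourly averaging hides.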
Advisors/Committee Members: Rayudu, Ramesh, Welch, Ian.
Subjects/Keywords: Photovoltaic; Electric power system; Interconnection impact analysis; Distribution systems; Electric grid; Photovoltaic systems; Distribution; Grid engineering
APA (6th Edition):
Emmanuel, M. (2018). Interconnection Impact Analysis of Solar Photovoltaic Systems with Distribution Networks. (Doctoral Dissertation). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/7052
Chicago Manual of Style (16th Edition):
Emmanuel, Michael. “Interconnection Impact Analysis of Solar Photovoltaic Systems with Distribution Networks.” 2018. Doctoral Dissertation, Victoria University of Wellington. Accessed January 15, 2021.
http://hdl.handle.net/10063/7052.
MLA Handbook (7th Edition):
Emmanuel, Michael. “Interconnection Impact Analysis of Solar Photovoltaic Systems with Distribution Networks.” 2018. Web. 15 Jan 2021.
Vancouver:
Emmanuel M. Interconnection Impact Analysis of Solar Photovoltaic Systems with Distribution Networks. [Internet] [Doctoral dissertation]. Victoria University of Wellington; 2018. [cited 2021 Jan 15].
Available from: http://hdl.handle.net/10063/7052.
Council of Science Editors:
Emmanuel M. Interconnection Impact Analysis of Solar Photovoltaic Systems with Distribution Networks. [Doctoral Dissertation]. Victoria University of Wellington; 2018. Available from: http://hdl.handle.net/10063/7052

2.
Kumar, Santosh.
Transfer Learning for Dataset Shift in Classification and Clustering Problems.
Degree: 2017, Victoria University of Wellington
URL: http://hdl.handle.net/10063/6808
This research focuses on the dataset shift problem in classification and clustering tasks. This thesis primarily aims to propose a domain-independent, transfer learning based solution for the dataset shift problem. Machine learning models already perform well in classification, clustering and regression tasks. Many traditional machine learning models perform well particularly under the prevailing assumption that training and test data instances are drawn from the identical data distribution and same feature space. However, in many real-world applications, insufficiency of labelled data instances is a challenging problem which limits the accuracy of machine learning models. Furthermore, the distribution of the data varies over time; therefore, a model trained on previous data instances produces lower accuracy when testing newly collected data instances. The problem of dataset shift occurs when the training and testing data instances follow different data distributions or feature spaces.
Transfer learning is a suitable framework to resolve the aforementioned limitation by allowing a model learned for one domain to be used to improve the learning of models in other domains. This is achieved either by concurrent training across domains or by subsequent transfer of knowledge from one domain to the other. This thesis is concerned with dataset shift problems in classification and clustering tasks. The overall goal of this thesis is to develop new domain-independent transfer learning approaches that are capable of solving the dataset shift problems in cross-domain classification and clustering tasks. In addition to classification tasks, we present an extensive case study of novel web-spam features and their classification methods.
A semi-supervised cluster-then-label domain adaptation approach is proposed for cross-domain sentiment and web spam classification, using transfer learning to handle covariate data shift (both labelling and instance-based data shift). The approach also improves classification accuracy despite the limitation of having very few labelled target-domain data instances and entirely unlabelled source-domain data instances. The experimental results reveal that for both cross-domain web spam and sentiment classification tasks the new methods significantly outperform other methods.
This thesis also considers dataset shift in cross-domain document clustering tasks and proposes a hybrid co-clustering based unsupervised transfer learning approach that handles dataset shift under both identical and non-identical data distributions. When compared to state-of-the-art methods, the experimental results reveal significantly better performance in all cross-domain clustering tasks under both identical and non-identical data distributions.
Also, an extension of the co-clustering algorithm demonstrates the flexibility and relative importance of co-clusters by clustering the documents and microarray gene expression data. This extended co-clustering…
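The dataset shift problem this abstract describes can be made concrete with a crude diagnostic (a common textbook check, not the thesis's method; data and thresholds are invented): if a trivial classifier can tell source-domain samples from target-domain samples well above chance, the two domains follow different distributions, i.e. the data has shifted.

```python
import random

random.seed(0)

# Hypothetical 1-D feature samples: source and target domains drawn from
# different distributions, i.e. a dataset (covariate) shift.
source = [random.gauss(0.0, 1.0) for _ in range(500)]
target = [random.gauss(1.5, 1.0) for _ in range(500)]

# A crude "domain classifier": assign a point to whichever domain mean it is
# closer to. Accuracy well above 50% means the domains are distinguishable,
# the standard symptom of dataset shift.
mu_s = sum(source) / len(source)
mu_t = sum(target) / len(target)

correct = sum(abs(x - mu_s) < abs(x - mu_t) for x in source)
correct += sum(abs(x - mu_t) < abs(x - mu_s) for x in target)
accuracy = correct / (len(source) + len(target))

print(round(accuracy, 2))  # noticeably above 0.5, so the domains have shifted
```

When such a check fires, a transfer learning approach like the one proposed here is needed rather than a model trained and tested under the identical-distribution assumption.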
Advisors/Committee Members: Gao, Xiaoying, Welch, Ian.
Subjects/Keywords: Machine learning; Classification; Clustering; Dataset shift
APA (6th Edition):
Kumar, S. (2017). Transfer Learning for Dataset Shift in Classification and Clustering Problems. (Doctoral Dissertation). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/6808
Chicago Manual of Style (16th Edition):
Kumar, Santosh. “Transfer Learning for Dataset Shift in Classification and Clustering Problems.” 2017. Doctoral Dissertation, Victoria University of Wellington. Accessed January 15, 2021.
http://hdl.handle.net/10063/6808.
MLA Handbook (7th Edition):
Kumar, Santosh. “Transfer Learning for Dataset Shift in Classification and Clustering Problems.” 2017. Web. 15 Jan 2021.
Vancouver:
Kumar S. Transfer Learning for Dataset Shift in Classification and Clustering Problems. [Internet] [Doctoral dissertation]. Victoria University of Wellington; 2017. [cited 2021 Jan 15].
Available from: http://hdl.handle.net/10063/6808.
Council of Science Editors:
Kumar S. Transfer Learning for Dataset Shift in Classification and Clustering Problems. [Doctoral Dissertation]. Victoria University of Wellington; 2017. Available from: http://hdl.handle.net/10063/6808

3.
Mahmood, Muhammad.
Event Reliability in Wireless Sensor Networks.
Degree: 2019, Victoria University of Wellington
URL: http://hdl.handle.net/10063/8105
Ensuring reliable transport of data in resource-constrained Wireless Sensor Networks (WSNs) is one of the primary concerns to achieve a high degree of efficiency in monitoring and control systems. The two reliability mechanisms typically used in WSNs are packet reliability and event reliability. Packet reliability, which requires all packets from all the sensor nodes to reach the sink, can result in wastage of the sensors' limited energy resources. Event reliability, which only requires that one packet related to each event reaches the sink, exploits the overlap of the sensing regions of densely deployed sensor nodes to eliminate redundant packets from nodes in close proximity that contain duplicate information about an event.
The majority of previous research in this area focuses on packet reliability rather than event reliability. Moreover, the research that does focus on event reliability relies on the sink to impose some form of control over the flow of data in the network. The sink's centralized control and decision-making increase the transmission of unnecessary packets, which degrades overall network performance in terms of energy, congestion and data flow.
This thesis proposes a distributed approach to the control of the flow of data in which each node makes in-node decisions using data readily available to it. This reduces the transmission of unnecessary packets, which reduces the network cost in terms of energy, congestion, and data flow. The major challenges involved in this research are to: (i) accurately identify that multiple packets are carrying information about the same event, (ii) reliably deliver the packets carrying information about the unique event, (iii) ensure that enough information about the area of interest is reliably delivered to the sink, and (iv) maintain the event coverage throughout the network.
This thesis presents the Event Reliability Protocol (ERP) and its extension, the Enhanced Event Reliability Protocol (EERP). The protocols aim for the reliable transmission of a packet containing information about each unique event to the sink while identifying and minimizing the unnecessary transmission of similar redundant packets from nodes in the region of the event. In this way, the sensor nodes consume less energy and increase the overall network lifetime. EERP uses a multilateration technique to identify multiple packets containing similar event information and thus is able to filter redundant packets of the same event. It also makes use of implicit acknowledgments (iACKs) for reliable delivery of the packets to the sink node. The process is based on the hop-by-hop mechanism where the decisions are made locally by the intermediate nodes.
The thesis reports on simulations in QualNet 5.2 for verifying the accuracy of our event identification and event reliability mechanisms employed in the ERP and EERP. The results show that EERP performs better in terms of minimizing overall packet transmission and hence the energy consumption at the sensor nodes in a WSN. Also, the results for event…
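The in-node filtering idea in this abstract can be sketched as follows (a simplification with an invented API, not EERP's actual implementation): a relay node remembers the estimated locations of events it has already forwarded and drops any packet whose estimated event location falls within a sensing-overlap radius of a known event.

```python
import math

# Sketch of distributed redundant-packet filtering (hypothetical class and
# threshold, for illustration only). Each relay decides locally, with no
# sink-side control, whether a packet reports a new event or a known one.

SAME_EVENT_RADIUS = 5.0  # metres; assumed sensing-overlap threshold

class RelayNode:
    def __init__(self):
        self.seen_events = []   # estimated (x, y) of already-forwarded events

    def should_forward(self, event_xy):
        for known in self.seen_events:
            if math.dist(event_xy, known) < SAME_EVENT_RADIUS:
                return False    # redundant report of a known event: drop it
        self.seen_events.append(event_xy)
        return True             # first report of this event: forward it

node = RelayNode()
print(node.should_forward((10.0, 10.0)))  # True:  a new event
print(node.should_forward((12.0, 11.0)))  # False: same event, nearby sensor
print(node.should_forward((40.0, 5.0)))   # True:  a different event
```

Dropping the duplicate at the relay, rather than at the sink, is what saves the transmission energy the abstract refers to.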
Advisors/Committee Members: Welch, Ian, Andreae, Peter.
Subjects/Keywords: Wireless Sensor Networks; Reliability; Event identification
APA (6th Edition):
Mahmood, M. (2019). Event Reliability in Wireless Sensor Networks. (Doctoral Dissertation). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/8105
Chicago Manual of Style (16th Edition):
Mahmood, Muhammad. “Event Reliability in Wireless Sensor Networks.” 2019. Doctoral Dissertation, Victoria University of Wellington. Accessed January 15, 2021.
http://hdl.handle.net/10063/8105.
MLA Handbook (7th Edition):
Mahmood, Muhammad. “Event Reliability in Wireless Sensor Networks.” 2019. Web. 15 Jan 2021.
Vancouver:
Mahmood M. Event Reliability in Wireless Sensor Networks. [Internet] [Doctoral dissertation]. Victoria University of Wellington; 2019. [cited 2021 Jan 15].
Available from: http://hdl.handle.net/10063/8105.
Council of Science Editors:
Mahmood M. Event Reliability in Wireless Sensor Networks. [Doctoral Dissertation]. Victoria University of Wellington; 2019. Available from: http://hdl.handle.net/10063/8105

4.
Seifert, Christian.
Cost-effective Detection of Drive-by-Download Attacks with Hybrid Client Honeypots.
Degree: 2010, Victoria University of Wellington
URL: http://hdl.handle.net/10063/1385
With the increasing connectivity of and reliance on computers and networks, important aspects of computer systems are under constant threat. In particular, drive-by-download attacks have emerged as a new threat to the integrity of computer systems. Drive-by-download attacks are client-side attacks that originate from web servers that are visited by web browsers. As a vulnerable web browser retrieves a malicious web page, the malicious web server can push malware to a user's machine that can be executed without their notice or consent.
The detection of malicious web pages that exist on the Internet is prohibitively expensive. It is estimated that approximately 150 million malicious web pages that launch drive-by-download attacks exist today. So-called high-interaction client honeypots are devices that are able to detect these malicious web pages, but they are slow and known to miss attacks. Detection of malicious web pages in these quantities with client honeypots would cost millions of US dollars.
Therefore, we have designed a more scalable system called a hybrid client honeypot. It consists of lightweight client honeypots, the so-called low-interaction client honeypots, and traditional high-interaction client honeypots. The lightweight low-interaction client honeypots inspect web pages at high speed and forward only likely malicious web pages to the high-interaction client honeypot for a final classification.
For the comparison of client honeypots and evaluation of the hybrid client honeypot system, we have chosen a cost-based evaluation method: the true positive cost curve (TPCC). It allows us to evaluate client honeypots against their primary purpose of identifying malicious web pages. We show that the cost of identifying malicious web pages with the developed hybrid client honeypot system is reduced by a factor of nine compared to traditional high-interaction client honeypots.
The five main contributions of our work are:
High-Interaction Client Honeypot: The first main contribution of our work is the design and implementation of a high-interaction client honeypot, Capture-HPC. It is an open-source, publicly available client honeypot research platform, which allows researchers and security professionals to conduct research on malicious web pages and client honeypots. Based on our client honeypot implementation and analysis of existing client honeypots, we developed a component model of client honeypots. This model allows researchers to agree on the object of study, allows for focus on specific areas within the object of study, and provides a framework for communication of research around client honeypots.
True Positive Cost Curve: As mentioned above, we have chosen a cost-based evaluation method to compare and evaluate client honeypots against their primary purpose of identifying malicious web pages: the true positive cost curve. It takes into account the unique characteristics of client honeypots, speed, detection accuracy, and resource cost, and provides a simple, cost-based mechanism to evaluate and compare…
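The cost-based reasoning behind the TPCC can be illustrated with a back-of-the-envelope calculation (all numbers are invented, not figures from the thesis): the cost of one true positive is the resource cost of inspecting pages divided by the detections made, so a fast pre-filter that raises effective throughput cuts the per-detection cost even if it loses a little accuracy.

```python
# Toy cost-per-detection comparison in the spirit of a true positive cost
# curve (hypothetical throughput, accuracy, and cost figures).

def cost_per_true_positive(pages_per_hour, detection_rate,
                           cost_per_hour, malicious_fraction):
    inspected = pages_per_hour * 1.0           # one hour of operation
    true_positives = inspected * malicious_fraction * detection_rate
    return cost_per_hour / true_positives

# High-interaction honeypot alone: thorough but slow.
hi_cost = cost_per_true_positive(50, 0.95, 2.0, 0.001)
# Hybrid: a fast low-interaction front end pre-filters, so the expensive
# back end only sees likely-malicious pages; effective throughput rises.
hybrid_cost = cost_per_true_positive(500, 0.90, 2.0, 0.001)

print(round(hi_cost / hybrid_cost, 1))  # 9.5: roughly the factor-of-nine
                                        # saving the abstract reports
```

The actual TPCC plots this trade-off across operating points rather than at a single configuration.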
Advisors/Committee Members: Komisarczuk, Peter, Welch, Ian.
Subjects/Keywords: Intrusion detection; Honeypots; Security
APA (6th Edition):
Seifert, C. (2010). Cost-effective Detection of Drive-by-Download Attacks with Hybrid Client Honeypots. (Doctoral Dissertation). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/1385
Chicago Manual of Style (16th Edition):
Seifert, Christian. “Cost-effective Detection of Drive-by-Download Attacks with Hybrid Client Honeypots.” 2010. Doctoral Dissertation, Victoria University of Wellington. Accessed January 15, 2021.
http://hdl.handle.net/10063/1385.
MLA Handbook (7th Edition):
Seifert, Christian. “Cost-effective Detection of Drive-by-Download Attacks with Hybrid Client Honeypots.” 2010. Web. 15 Jan 2021.
Vancouver:
Seifert C. Cost-effective Detection of Drive-by-Download Attacks with Hybrid Client Honeypots. [Internet] [Doctoral dissertation]. Victoria University of Wellington; 2010. [cited 2021 Jan 15].
Available from: http://hdl.handle.net/10063/1385.
Council of Science Editors:
Seifert C. Cost-effective Detection of Drive-by-Download Attacks with Hybrid Client Honeypots. [Doctoral Dissertation]. Victoria University of Wellington; 2010. Available from: http://hdl.handle.net/10063/1385

5.
Palmer, Benjamin Philip.
Anonymously Establishing Digital Provenance in Reseller Chains.
Degree: 2012, Victoria University of Wellington
URL: http://hdl.handle.net/10063/2281
An increasing number of products are exclusively digital items, such as media files, licenses, services, or subscriptions. In many cases customers do not purchase these items directly from the originator of the product but through a reseller instead. Examples of some well known resellers include GoDaddy, the iTunes music store, and Amazon.
This thesis considers the concept of provenance of digital items in reseller chains. Provenance is defined as the origin and ownership history of an item. In the context of digital items, the origin of the item refers to the supplier that created it and the ownership history establishes a chain of ownership from the supplier to the customer. While customers and suppliers are concerned with the provenance of the digital items, resellers will not want the details of the transactions they have taken part in made public. Resellers will require the provenance information to be anonymous and unlinkable to prevent third parties building up large amounts of information on the transactions of resellers. This thesis develops security mechanisms that provide customers and suppliers with assurances about the provenance of a digital item, even when the reseller is untrusted, while providing anonymity and unlinkability for resellers.
The main contribution of this thesis is the design, development, and analysis of the tagged transaction protocol. A formal description of the problem and the security properties for anonymously providing provenance for digital items in reseller chains are defined. A thorough security analysis using proofs by contradiction shows the protocol fulfils the security requirements. This security analysis is supported by modelling the protocol and security requirements using Communicating Sequential Processes (CSP) and the Failures Divergences Refinement (FDR) model checker. An extended version of the tagged transaction protocol is also presented that provides revocable anonymity for resellers that try to conduct a cloning attack on the protocol. As well as an analysis of the security of the tagged transaction protocol, a performance analysis is conducted providing complexity results as well as empirical results from an implementation of the protocol.
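The notion of an ownership chain from supplier to customer can be illustrated with a naive hash chain (illustrative only, not the tagged transaction protocol; note that, unlike the thesis's protocol, this sketch exposes reseller identities, and providing anonymity and unlinkability is precisely the hard part the protocol addresses):

```python
import hashlib

def link(prev_hash, owner):
    """Append an owner to the provenance chain by hashing the previous link."""
    return hashlib.sha256((prev_hash + owner).encode()).hexdigest()

def build_chain(supplier, resellers, customer):
    h = link("", supplier)          # origin: the supplier that created the item
    for r in resellers:             # ownership history through the chain
        h = link(h, r)
    return link(h, customer)

# A customer can verify origin and ownership history by recomputing the chain.
recorded = build_chain("supplier", ["resellerA", "resellerB"], "customer")
assert recorded == build_chain("supplier", ["resellerA", "resellerB"], "customer")
# Any tampering with the recorded history yields a different digest.
assert recorded != build_chain("supplier", ["resellerX", "resellerB"], "customer")
print("chain verified")
```

The thesis's contribution is achieving this kind of assurance while keeping each reseller's transactions anonymous and unlinkable.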
Advisors/Committee Members: Bubendorfer, Kris, Welch, Ian.
Subjects/Keywords: Provenance; e-Commerce; Verification
APA (6th Edition):
Palmer, B. P. (2012). Anonymously Establishing Digital Provenance in Reseller Chains. (Doctoral Dissertation). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/2281
Chicago Manual of Style (16th Edition):
Palmer, Benjamin Philip. “Anonymously Establishing Digital Provenance in Reseller Chains.” 2012. Doctoral Dissertation, Victoria University of Wellington. Accessed January 15, 2021.
http://hdl.handle.net/10063/2281.
MLA Handbook (7th Edition):
Palmer, Benjamin Philip. “Anonymously Establishing Digital Provenance in Reseller Chains.” 2012. Web. 15 Jan 2021.
Vancouver:
Palmer BP. Anonymously Establishing Digital Provenance in Reseller Chains. [Internet] [Doctoral dissertation]. Victoria University of Wellington; 2012. [cited 2021 Jan 15].
Available from: http://hdl.handle.net/10063/2281.
Council of Science Editors:
Palmer BP. Anonymously Establishing Digital Provenance in Reseller Chains. [Doctoral Dissertation]. Victoria University of Wellington; 2012. Available from: http://hdl.handle.net/10063/2281

6.
Nowitz, Jayden.
A Modern Perspective on Phishing: An investigation into susceptibility to phishing attacks between mobile and desktop email clients.
Degree: 2018, Victoria University of Wellington
URL: http://hdl.handle.net/10063/7907
Research on how to counter phishing from a user behavior perspective has been explored for over a decade, yet the prevalence of such threats is increasing. This thesis aims to provide a modern perspective by considering whether there is a difference in how susceptible an individual is on a mobile device versus a desktop email client. Currently very few studies consider phishing on mobile devices, and the research is unclear as to the potential difference in susceptibility rates between the two device types. Initially, a review of 60 phishing emails received by the university that had passed mail filtering was used to assist in the design of the messages for the second stage of the study. Following this, a simulated phishing attack was undertaken on two groups within one unit of professional administrative staff at the university (141 in total, with 71 in Group A and 70 in Group B). The defining characteristic between the groups was how they responded to a message with a ‘loss versus gain’ appeal. This area has received limited exploration in the research and findings remain unclear. This study found that people were statistically far more susceptible to the ‘gain’ message of a free coffee, at 28.2%, than the ‘loss’ message of Office365 account suspension, at 7.1%. For device type there appears to be no statistically significant difference, even between the groups. This study highlights complexities of device usage around phishing that have not been clearly highlighted in previous studies, such as people viewing emails with one device and falling victim on another.
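The significance of the reported gap can be checked with a standard two-proportion z-test (the thesis does not state which test it used; the counts here are reconstructed from the reported percentages, 28.2% of 71 ≈ 20 clickers in Group A and 7.1% of 70 ≈ 5 in Group B, so they are assumptions):

```python
import math

# Two-proportion z-test on the reconstructed click counts.
x1, n1 = 20, 71   # 'gain' message (free coffee)
x2, n2 = 5, 70    # 'loss' message (account suspension)

p1, p2 = x1 / n1, x2 / n2
pooled = (x1 + x2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(round(z, 2), round(p_value, 4))  # z ≈ 3.27, p ≈ 0.001: significant
```

A p-value around 0.001 is consistent with the abstract's claim that the gain/loss difference is statistically significant.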
Advisors/Committee Members: Welch, Ian, Hooper, Val.
Subjects/Keywords: Phishing; Mobile; Desktop
APA (6th Edition):
Nowitz, J. (2018). A Modern Perspective on Phishing: An investigation into susceptibility to phishing attacks between mobile and desktop email clients. (Masters Thesis). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/7907
Chicago Manual of Style (16th Edition):
Nowitz, Jayden. “A Modern Perspective on Phishing: An investigation into susceptibility to phishing attacks between mobile and desktop email clients.” 2018. Masters Thesis, Victoria University of Wellington. Accessed January 15, 2021.
http://hdl.handle.net/10063/7907.
MLA Handbook (7th Edition):
Nowitz, Jayden. “A Modern Perspective on Phishing: An investigation into susceptibility to phishing attacks between mobile and desktop email clients.” 2018. Web. 15 Jan 2021.
Vancouver:
Nowitz J. A Modern Perspective on Phishing: An investigation into susceptibility to phishing attacks between mobile and desktop email clients. [Internet] [Masters thesis]. Victoria University of Wellington; 2018. [cited 2021 Jan 15].
Available from: http://hdl.handle.net/10063/7907.
Council of Science Editors:
Nowitz J. A Modern Perspective on Phishing: An investigation into susceptibility to phishing attacks between mobile and desktop email clients. [Masters Thesis]. Victoria University of Wellington; 2018. Available from: http://hdl.handle.net/10063/7907

7.
Shepherd, Deb.
Developing the Fringe Routing Protocol.
Degree: 2011, Victoria University of Wellington
URL: http://hdl.handle.net/10063/1981
An ISP style network often has a particular traffic pattern not typically seen in other networks, which is a direct result of the ISP’s purpose: to connect internal clients with a high speed external link. Such a network is likely to consist of a backbone with the clients on one ‘side’ and one or more external links on the other. Most traffic on the network moves between an internal client and the external world via the backbone.
But what about traffic between two clients of the ISP? Typical routing protocols will find the ‘best’ path between the two gateway routers at the edge of the client stub networks. As these routers connect the stubs to the ISP core, this route should be entirely within the ISP network. Ideally, from the ISP point of view, this traffic will go up to the backbone and down again, but it is possible that it may find another route along a redundant backup path.
Don Stokes of Knossos Networks has developed a protocol to sit on the client fringes of this ISP style of network. It is based on the distance vector algorithm and is intended to be subordinate to the existing interior gateway protocol running on the ISP’s backbone. It manipulates the route cost calculation so that paths towards the backbone become very cheap and paths away from the backbone become expensive. This forces traffic in the preferred direction unless the backup path ‘shortcut’ is very attractive or the backbone link has disappeared.
It is the analysis and development of the fringe routing protocol that forms the content of this ME thesis.
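The cost manipulation described above can be sketched as a weighted shortest-path computation (the topology, node names, and weights are invented for illustration; the protocol itself is distance vector, not link state): edges pointing towards the backbone are cheap and edges pointing away are expensive, so client-to-client traffic prefers the up-and-over path unless the fringe shortcut becomes dramatically cheaper or the backbone vanishes.

```python
import heapq

# Illustrative topology with direction-dependent edge costs: cheap towards
# the backbone, expensive away from it, and a heavily penalised fringe
# shortcut between the two client-side routers.
TOWARD, AWAY, SHORTCUT = 1, 10, 100
graph = {
    "clientA":  {"fringe1": TOWARD},
    "fringe1":  {"backbone": TOWARD, "fringe2": SHORTCUT},
    "backbone": {"fringe1": AWAY, "fringe2": AWAY},
    "fringe2":  {"clientB": AWAY, "fringe1": SHORTCUT},
    "clientB":  {},
}

def shortest_path(src, dst):
    """Dijkstra over the directed, direction-weighted graph."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

path, cost = shortest_path("clientA", "clientB")
print(path, cost)  # routes via the backbone, not the fringe shortcut
```

With the shortcut penalised, the chosen route is clientA → fringe1 → backbone → fringe2 → clientB; removing the backbone edges would leave the shortcut as the only (backup) path, which is exactly the fallback behaviour described above.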
Advisors/Committee Members: Linton, Andy, Welch, Ian.
Subjects/Keywords: Computer networking; Routing; Fringe routing protocol
APA (6th Edition):
Shepherd, D. (2011). Developing the Fringe Routing Protocol. (Masters Thesis). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/1981
Chicago Manual of Style (16th Edition):
Shepherd, Deb. “Developing the Fringe Routing Protocol.” 2011. Masters Thesis, Victoria University of Wellington. Accessed January 15, 2021.
http://hdl.handle.net/10063/1981.
MLA Handbook (7th Edition):
Shepherd, Deb. “Developing the Fringe Routing Protocol.” 2011. Web. 15 Jan 2021.
Vancouver:
Shepherd D. Developing the Fringe Routing Protocol. [Internet] [Masters thesis]. Victoria University of Wellington; 2011. [cited 2021 Jan 15].
Available from: http://hdl.handle.net/10063/1981.
Council of Science Editors:
Shepherd D. Developing the Fringe Routing Protocol. [Masters Thesis]. Victoria University of Wellington; 2011. Available from: http://hdl.handle.net/10063/1981

8.
Radford, Paul.
Improving Salience Retention and Identification in the Automated Filtering of Event Log Messages.
Degree: 2011, Victoria University of Wellington
URL: http://hdl.handle.net/10063/2077
Event log messages are currently the only genuine interface through which computer systems administrators can effectively monitor their systems and assemble a mental perception of system state. The popularisation of the Internet and the accompanying meteoric growth of business-critical systems has resulted in an overwhelming volume of event log messages, channeled through mechanisms whose designers could not have envisaged the scale of the problem. Messages regarding intrusion detection, hardware status, operating system status changes, database tablespaces, and so on, are being produced at the rate of many gigabytes per day for a significant computing environment.
Filtering technologies have not been able to keep up. Most messages go unnoticed; no filtering whatsoever is performed on them, at least in part due to the difficulty of implementing and maintaining an effective filtering solution. The most commonly deployed filtering alternatives rely on regular expressions to match pre-defined strings with 100% accuracy, which can become ineffective as the code base for the software producing the messages 'drifts' away from those strings. The exactness requirement means all possible failure scenarios must be accurately anticipated and their events catered for with regular expressions in order to make full use of this technique.
Alternatives to regular expressions remain largely academic. Data mining, automated corpus construction, and neural networks, to name the highest-profile ones, only produce probabilistic results and are either difficult or impossible to alter in any deterministic way. Policies are therefore not supported under these alternatives.
This thesis explores a new architecture which utilises rich metadata in order to avoid the burden of message interpretation. The metadata itself is based on an intention to improve end-to-end communication and reduce ambiguity. A simple yet effective filtering scheme is also presented which filters log messages through a short and easily customisable set of rules. With such an architecture, it is envisaged that systems administrators could significantly improve their awareness of their systems while avoiding many of the false positives and negatives which plague today's filtering solutions.
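The filtering scheme described above, a short and easily customisable set of rules applied to message metadata rather than to message text, might look something like this minimal sketch. The metadata field names, severity scale and rules are hypothetical, not taken from the thesis:

```python
# Illustrative sketch: filter structured log messages by metadata fields
# instead of matching exact strings with regular expressions.

RULES = [
    # (predicate, action) pairs, evaluated in order; first match wins.
    (lambda m: m["severity"] >= 5, "alert"),
    (lambda m: m["subsystem"] == "intrusion-detection", "alert"),
    (lambda m: m["severity"] <= 1, "discard"),
]

def filter_message(msg, rules=RULES, default="archive"):
    """Return the action for a message based on its metadata alone."""
    for predicate, action in rules:
        if predicate(msg):
            return action
    return default

msg = {"severity": 6, "subsystem": "storage", "text": "tablespace nearly full"}
print(filter_message(msg))  # severity 6 triggers the first rule: "alert"
```

Because the rules never inspect the message text, they are unaffected by the string 'drift' that breaks regex-based filters when the producing software changes.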
Advisors/Committee Members: Welch, Ian, Linton, Andy.
Subjects/Keywords: Salience; Messages; Filtering
APA (6th Edition):
Radford, P. (2011). Improving Salience Retention and Identification in the Automated Filtering of Event Log Messages. (Masters Thesis). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/2077
Chicago Manual of Style (16th Edition):
Radford, Paul. “Improving Salience Retention and Identification in the Automated Filtering of Event Log Messages.” 2011. Masters Thesis, Victoria University of Wellington. Accessed January 15, 2021.
http://hdl.handle.net/10063/2077.
MLA Handbook (7th Edition):
Radford, Paul. “Improving Salience Retention and Identification in the Automated Filtering of Event Log Messages.” 2011. Web. 15 Jan 2021.
Vancouver:
Radford P. Improving Salience Retention and Identification in the Automated Filtering of Event Log Messages. [Internet] [Masters thesis]. Victoria University of Wellington; 2011. [cited 2021 Jan 15].
Available from: http://hdl.handle.net/10063/2077.
Council of Science Editors:
Radford P. Improving Salience Retention and Identification in the Automated Filtering of Event Log Messages. [Masters Thesis]. Victoria University of Wellington; 2011. Available from: http://hdl.handle.net/10063/2077

Victoria University of Wellington
9.
Chard, Ryan.
Reputation Description and Interpretation.
Degree: 2012, Victoria University of Wellington
URL: http://hdl.handle.net/10063/2303
Reputation is an opinion held by others about a particular person, group, organisation, or resource. As a tool, reputation can be used to forecast the reliability of others based on their previous actions; moreover, in some domains it can even be used to estimate trustworthiness. Due to the large scale of virtual communities it is impossible to maintain a meaningful relationship with every member. Reputation systems are designed explicitly to manufacture trust within a virtual community by recording and sharing information regarding past interactions. Reputation systems are becoming increasingly popular and widespread, with the information generated varying considerably between domains. Currently, no formal method to exchange reputation information exists. However, the OpenRep framework, currently under development, is designed to federate reputation information, enabling the transparent exchange of information between reputation systems. This thesis presents a reputation description and interpretation system, designed as a foundation for the OpenRep framework.
The description and interpretation system focuses on enabling the consistent and reliable expression and interpretation of reputation information across heterogeneous reputation systems. It includes a strongly typed language, a verification system to validate usage of the language, and an XML-based exchange protocol. In addition to these contributions, three case studies are presented as a means of generating requirements for the description and interpretation system, and of evaluating the use of the proposed system in a federated reputation environment. The case studies include an electronic auction, a virtual community, and a social network based relationship management service.
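A strongly typed, XML-based exchange of reputation information could be sketched as below. The element names and `type` attribute are illustrative assumptions, not the actual OpenRep schema:

```python
import xml.etree.ElementTree as ET

def build_record(subject, score, domain):
    """Serialise one reputation assertion as XML (element names illustrative)."""
    rec = ET.Element("reputation")
    ET.SubElement(rec, "subject").text = subject
    ET.SubElement(rec, "score", type="float").text = str(score)
    ET.SubElement(rec, "domain").text = domain
    return ET.tostring(rec, encoding="unicode")

def parse_record(xml_text):
    """Interpret an incoming record, coercing the score by its declared type."""
    rec = ET.fromstring(xml_text)
    score = rec.find("score")
    value = float(score.text) if score.get("type") == "float" else score.text
    return {"subject": rec.findtext("subject"),
            "score": value,
            "domain": rec.findtext("domain")}

wire = build_record("seller42", 0.93, "auction")
print(parse_record(wire))
```

The declared type on the `score` element is what lets two heterogeneous reputation systems interpret the same value consistently, which is the core concern of the description and interpretation system above.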
Advisors/Committee Members: Bubendorfer, Kris, Welch, Ian.
Subjects/Keywords: Reputation exchange; Trust; OpenRep
APA (6th Edition):
Chard, R. (2012). Reputation Description and Interpretation. (Masters Thesis). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/2303
Chicago Manual of Style (16th Edition):
Chard, Ryan. “Reputation Description and Interpretation.” 2012. Masters Thesis, Victoria University of Wellington. Accessed January 15, 2021.
http://hdl.handle.net/10063/2303.
MLA Handbook (7th Edition):
Chard, Ryan. “Reputation Description and Interpretation.” 2012. Web. 15 Jan 2021.
Vancouver:
Chard R. Reputation Description and Interpretation. [Internet] [Masters thesis]. Victoria University of Wellington; 2012. [cited 2021 Jan 15].
Available from: http://hdl.handle.net/10063/2303.
Council of Science Editors:
Chard R. Reputation Description and Interpretation. [Masters Thesis]. Victoria University of Wellington; 2012. Available from: http://hdl.handle.net/10063/2303

Victoria University of Wellington
10.
Iqbal, Aun Haji.
Development of Osix - A Peering Point in a Box.
Degree: 2013, Victoria University of Wellington
URL: http://hdl.handle.net/10063/2699
Sending traffic over international communication links is much more expensive than sending traffic locally. Unfortunately, there are situations where two local networks end up using the international links because there are no national links between the two networks. To avoid this, a peering relationship should be established between these two local networks to allow them to directly exchange traffic. Peering relationships are implemented at Internet Exchange Points (IXPs).
There has been a significant increase in the number of IXPs in developed countries, but take-up in developing countries has been slow despite these countries having the most to gain due to the high prices that they pay for international bandwidth. Research has identified that a lack of technical skills is a key barrier to the deployment of IXPs in these countries. In particular, although skills exist to maintain an IXP, there is a lack of technical expertise to integrate individual tools together to implement one.
The goal of this thesis is to develop an integrated IXP solution that could be easily deployed in developing countries. The content of this thesis includes an analysis of the requirements for such a solution, the development of a design, a description of implementation trade-offs, and an evaluation of a final solution.
Advisors/Committee Members: Welch, Ian, Linton, Andy.
Subjects/Keywords: Peering; Exchange; Routing
APA (6th Edition):
Iqbal, A. H. (2013). Development of Osix - A Peering Point in a Box. (Masters Thesis). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/2699
Chicago Manual of Style (16th Edition):
Iqbal, Aun Haji. “Development of Osix - A Peering Point in a Box.” 2013. Masters Thesis, Victoria University of Wellington. Accessed January 15, 2021.
http://hdl.handle.net/10063/2699.
MLA Handbook (7th Edition):
Iqbal, Aun Haji. “Development of Osix - A Peering Point in a Box.” 2013. Web. 15 Jan 2021.
Vancouver:
Iqbal AH. Development of Osix - A Peering Point in a Box. [Internet] [Masters thesis]. Victoria University of Wellington; 2013. [cited 2021 Jan 15].
Available from: http://hdl.handle.net/10063/2699.
Council of Science Editors:
Iqbal AH. Development of Osix - A Peering Point in a Box. [Masters Thesis]. Victoria University of Wellington; 2013. Available from: http://hdl.handle.net/10063/2699

Victoria University of Wellington
11.
Thomson, Wayne.
GAF: A General Auction Framework for Secure Combinatorial Auctions.
Degree: 2013, Victoria University of Wellington
URL: http://hdl.handle.net/10063/3247
Auctions are an economic mechanism for allocating goods to interested parties. There are many methods, each of which is an auction protocol. Some protocols are relatively simple, such as English and Dutch auctions, but there are also more complicated auctions, for example combinatorial auctions which sell multiple goods at a time, and secure auctions which incorporate security solutions. Corresponding to the large number of protocols, there is a variety of purposes for which protocols are used. Each protocol has different properties, and they differ in how applicable they are to a particular domain.
In this thesis, the protocols explored are privacy-preserving secure combinatorial auctions, which are particularly well suited to our target domain of computational grid system resource allocation. In grid resource allocation systems, goods are best sold in sets as bidders value different sets of goods differently. For example, when purchasing CPU cycles, memory is also required, but a bidder may additionally require network bandwidth. In untrusted distributed systems such as a publicly accessible grid, security properties are paramount. The type of secure combinatorial auction protocols explored in this thesis are privacy-preserving protocols which hide the bid values of losing bidders' bids. These protocols allow bidders to place bids without fear of private information being leaked.
With the large number of permutations of different protocols and configurations, it is difficult to manage the idiosyncrasies of many different protocol implementations within an individual application. This thesis proposes a specification, design, and implementation for a General Auction Framework (GAF). GAF provides a consistent method of implementing different types of auction protocols, from the standard English auction through to the more complicated combinatorial and secure auctions. The benefit of using GAF is the ability to easily leverage multiple protocols within a single application due to the consistent specification of protocol construction.
The framework has been tested with three different protocols: the Secure Polynomial auction protocol, the Secure Homomorphic auction protocol and the Secure Garbled Circuits auction protocol. These three protocols, together with a statistics-collecting application, are a proof of concept for the framework and provide the beginning of an analysis aimed at determining suitable protocol candidates for grid systems.
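Winner determination, the allocation problem any combinatorial auction protocol above must ultimately solve, can be illustrated with a brute-force sketch. It is exponential in the number of bids, so only viable for tiny instances, and the secure protocols named above perform the equivalent computation under cryptographic protection; the bids below are toy data:

```python
from itertools import combinations

def winner_determination(bids):
    """Exhaustively find the revenue-maximising set of non-overlapping bids.

    bids: list of (bidder, set_of_goods, price) tuples.
    """
    best_value, best_alloc = 0, []
    for r in range(1, len(bids) + 1):
        for subset in combinations(bids, r):
            claimed = [g for _, goods, _ in subset for g in goods]
            if len(claimed) != len(set(claimed)):
                continue  # two bids claim the same good: infeasible
            value = sum(price for _, _, price in subset)
            if value > best_value:
                best_value, best_alloc = value, list(subset)
    return best_value, best_alloc

bids = [("A", {"cpu", "ram"}, 10),
        ("B", {"cpu"}, 6),
        ("C", {"ram", "net"}, 7),
        ("D", {"net"}, 5)]
value, alloc = winner_determination(bids)
print(value)  # 15: bids A and D together beat B + C (13)
```

The example shows why goods are best sold in sets: bidder A values cpu and ram together more than B and C value them separately.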
Advisors/Committee Members: Bubendorfer, Kris, Welch, Ian.
Subjects/Keywords: Auction; Security; Framework
APA (6th Edition):
Thomson, W. (2013). GAF: A General Auction Framework for Secure Combinatorial Auctions. (Masters Thesis). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/3247
Chicago Manual of Style (16th Edition):
Thomson, Wayne. “GAF: A General Auction Framework for Secure Combinatorial Auctions.” 2013. Masters Thesis, Victoria University of Wellington. Accessed January 15, 2021.
http://hdl.handle.net/10063/3247.
MLA Handbook (7th Edition):
Thomson, Wayne. “GAF: A General Auction Framework for Secure Combinatorial Auctions.” 2013. Web. 15 Jan 2021.
Vancouver:
Thomson W. GAF: A General Auction Framework for Secure Combinatorial Auctions. [Internet] [Masters thesis]. Victoria University of Wellington; 2013. [cited 2021 Jan 15].
Available from: http://hdl.handle.net/10063/3247.
Council of Science Editors:
Thomson W. GAF: A General Auction Framework for Secure Combinatorial Auctions. [Masters Thesis]. Victoria University of Wellington; 2013. Available from: http://hdl.handle.net/10063/3247

Victoria University of Wellington
12.
Davenport, Hugh.
Implementation and Evaluation of Security Protocols in E-Commerce Applications.
Degree: 2013, Victoria University of Wellington
URL: http://hdl.handle.net/10063/2631
A large number of programs are under development in today's technology environment, and many of them have some type of security need. These needs are usually not dealt with sensibly, and some projects do not bother with any security analysis at all. This thesis describes a solution for implementing a secure protocol, and gives an evaluation of the process along with the techniques and tools that aid a secure design and implementation process. This allows others to take this knowledge into account when building other applications which require security development.
Advisors/Committee Members: Bubendorfer, Kris, Welch, Ian.
Subjects/Keywords: Security; Evaluation; Protocol
APA (6th Edition):
Davenport, H. (2013). Implementation and Evaluation of Security Protocols in E-Commerce Applications. (Masters Thesis). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/2631
Chicago Manual of Style (16th Edition):
Davenport, Hugh. “Implementation and Evaluation of Security Protocols in E-Commerce Applications.” 2013. Masters Thesis, Victoria University of Wellington. Accessed January 15, 2021.
http://hdl.handle.net/10063/2631.
MLA Handbook (7th Edition):
Davenport, Hugh. “Implementation and Evaluation of Security Protocols in E-Commerce Applications.” 2013. Web. 15 Jan 2021.
Vancouver:
Davenport H. Implementation and Evaluation of Security Protocols in E-Commerce Applications. [Internet] [Masters thesis]. Victoria University of Wellington; 2013. [cited 2021 Jan 15].
Available from: http://hdl.handle.net/10063/2631.
Council of Science Editors:
Davenport H. Implementation and Evaluation of Security Protocols in E-Commerce Applications. [Masters Thesis]. Victoria University of Wellington; 2013. Available from: http://hdl.handle.net/10063/2631

Victoria University of Wellington
13.
Yang, Kaishuo.
Reverse Engineering of an Obfuscated Binary.
Degree: 2020, Victoria University of Wellington
URL: http://hdl.handle.net/10063/9244
Reverse engineering is an important process employed both by attackers seeking to gain entry to a system and by the security engineers who protect it. While there are numerous tools developed for this purpose, they can often be tedious to use and rely on previously obtained domain knowledge. After examining a number of contemporary tools, we design and implement a de-noising tool that reduces the human effort needed to perform reverse engineering. The tool takes snapshots of a target program's memory as the user consistently interacts with it. By comparing changes across multiple sets of snapshots, consistent changes in memory that can be attributed to the user action are identified. We demonstrate its use on three Windows applications: Minesweeper, Solitaire and Notepad++. With assistance from the de-noising tool, we were able to discover information such as the location of mines and the values of cards in the two games before they are revealed, and the data structure used for input to Notepad++.
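The core de-noising idea, keeping only memory locations that change consistently across repeated user actions, can be sketched as set intersection over snapshot diffs. This is a simplification of the tool described above; the addresses and values are toy data:

```python
def consistent_changes(snapshot_pairs):
    """Keep only addresses that changed in *every* before/after pair.

    snapshot_pairs: list of (before, after) dicts mapping address -> value.
    Changes present in all pairs are attributed to the repeated user
    action; everything else is treated as background noise.
    """
    surviving = None
    for before, after in snapshot_pairs:
        changed = {addr for addr in before if before[addr] != after.get(addr)}
        surviving = changed if surviving is None else surviving & changed
    return surviving or set()

# Two repetitions of the same action: 0x10 changes both times (signal),
# while 0x20 and 0x30 each change only once (noise).
pairs = [
    ({0x10: 0, 0x20: 1, 0x30: 2}, {0x10: 9, 0x20: 5, 0x30: 2}),
    ({0x10: 9, 0x20: 5, 0x30: 2}, {0x10: 3, 0x20: 5, 0x30: 7}),
]
print(sorted(consistent_changes(pairs)))  # [16], i.e. only address 0x10
```

Each additional repetition of the user action shrinks the surviving set, which is what reduces the manual inspection effort.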
Advisors/Committee Members: Pearce, David, Welch, Ian.
Subjects/Keywords: reverse engineering; disassembly; memory analysis; reverse engineering in games
APA (6th Edition):
Yang, K. (2020). Reverse Engineering of an Obfuscated Binary. (Masters Thesis). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/9244
Chicago Manual of Style (16th Edition):
Yang, Kaishuo. “Reverse Engineering of an Obfuscated Binary.” 2020. Masters Thesis, Victoria University of Wellington. Accessed January 15, 2021.
http://hdl.handle.net/10063/9244.
MLA Handbook (7th Edition):
Yang, Kaishuo. “Reverse Engineering of an Obfuscated Binary.” 2020. Web. 15 Jan 2021.
Vancouver:
Yang K. Reverse Engineering of an Obfuscated Binary. [Internet] [Masters thesis]. Victoria University of Wellington; 2020. [cited 2021 Jan 15].
Available from: http://hdl.handle.net/10063/9244.
Council of Science Editors:
Yang K. Reverse Engineering of an Obfuscated Binary. [Masters Thesis]. Victoria University of Wellington; 2020. Available from: http://hdl.handle.net/10063/9244

Victoria University of Wellington
14.
Truong, Huu Trung.
Software-Defined Network Application for Inter-domain Routing in Transit ISPs.
Degree: 2020, Victoria University of Wellington
URL: http://hdl.handle.net/10063/9150
Today, the Internet plays a vital part in our society. We rely greatly on the Internet to work, to communicate and to entertain ourselves. The Internet is a very large and complex computer network, consisting of tens of thousands of networks called autonomous systems (ASes). The key routing protocol for interdomain routing between ASes, the Border Gateway Protocol (BGP), was invented three decades ago. Although BGP has undergone many improvements, many of its fundamental problems and limitations still exist today. For instance, BGP lacks resilience to attacks and good support for traffic engineering. To date, many evolutionary and revolutionary solutions have been proposed to address these problems, but very few have been adopted. While there are many reasons for this limited adoption, one may blame the lack of deployability, scalability and, more importantly, adequate functionality.
SDN is a new networking paradigm that decouples the control plane from the data plane. SDN breaks the ossification of the Internet and enables network innovation. SDN has been thoroughly investigated in the enterprise environment. This research investigates the application of SDN to interdomain routing in transit ASes. Specifically, the main goal is to study how SDN capabilities can be utilised to develop a scalable, programmable and flexible routing architecture for transit ISPs.
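As a point of reference for what an SDN routing application must reimplement in software, the first two tie-breaking steps of the standard BGP best-path decision can be sketched as follows. This is a deliberately reduced model (the full decision process has several further steps), and the route data is hypothetical:

```python
def best_path(routes):
    """Pick the best route per the first BGP tie-breaking steps:
    highest local-preference first, then shortest AS path.
    """
    return max(routes, key=lambda r: (r["local_pref"], -len(r["as_path"])))

routes = [
    {"via": "peer1", "local_pref": 100, "as_path": [64500, 64510]},
    {"via": "peer2", "local_pref": 100, "as_path": [64501]},
    {"via": "peer3", "local_pref": 90,  "as_path": [64502]},
]
print(best_path(routes)["via"])  # peer2: equal local-pref, shorter AS path
```

Centralising this selection in a controller, rather than running it independently on every router, is what gives an SDN routing architecture its programmability.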
Advisors/Committee Members: Welch, Ian, Ng, Bryan.
Subjects/Keywords: SDN; Interdomain; ISP; Routing; BGP
APA (6th Edition):
Truong, H. T. (2020). Software-Defined Network Application for Inter-domain Routing in Transit ISPs. (Doctoral Dissertation). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/9150
Chicago Manual of Style (16th Edition):
Truong, Huu Trung. “Software-Defined Network Application for Inter-domain Routing in Transit ISPs.” 2020. Doctoral Dissertation, Victoria University of Wellington. Accessed January 15, 2021.
http://hdl.handle.net/10063/9150.
MLA Handbook (7th Edition):
Truong, Huu Trung. “Software-Defined Network Application for Inter-domain Routing in Transit ISPs.” 2020. Web. 15 Jan 2021.
Vancouver:
Truong HT. Software-Defined Network Application for Inter-domain Routing in Transit ISPs. [Internet] [Doctoral dissertation]. Victoria University of Wellington; 2020. [cited 2021 Jan 15].
Available from: http://hdl.handle.net/10063/9150.
Council of Science Editors:
Truong HT. Software-Defined Network Application for Inter-domain Routing in Transit ISPs. [Doctoral Dissertation]. Victoria University of Wellington; 2020. Available from: http://hdl.handle.net/10063/9150

Victoria University of Wellington
15.
Tariq, Hassan.
Resource and Performance Modelling of Hadoop Clusters Using Machine Learning.
Degree: 2020, Victoria University of Wellington
URL: http://hdl.handle.net/10063/8958
There is a huge and rapidly increasing amount of data being generated by social media, mobile applications and sensing devices. Big data is the term usually used to describe such data, characterized in terms of the 3Vs: volume, variety and velocity. In order to process and mine such massive amounts of data, several approaches and platforms have been developed, such as Hadoop. Hadoop is a popular open-source distributed and parallel computing framework. It has a large number of configurable parameters which can be set before the execution of jobs to optimize the resource utilization and execution time of a cluster. These parameters have a significant impact on system resources and execution time, and optimizing the performance of a Hadoop cluster by tuning such a large number of them is a tedious task. Most current big data modeling approaches do not include the complex interaction between configuration parameters and cluster environment changes, such as the use of different datasets or types of query. This makes it difficult to predict, for example, the execution time of a job or the resource utilization of a cluster. Other attributes include configuration parameters, the structure of the query, the dataset, the number of nodes and the infrastructure used.
Our first main objective was to design reliable experiments to understand the relationship between attributes. Before designing and implementing the actual experiments, we applied Hazard and Operability (HAZOP) analysis to identify operational hazards that could affect the normal working of the cluster and the execution of Hadoop jobs. This brainstorming activity improved the design and implementation of our experiments by improving their internal validity, and helped us to identify the considerations that must be taken into account for reliable results. After implementing our design, we characterized the relationship between different Hadoop configuration parameters and network and system performance measures.
Our second main objective was to investigate the use of machine learning to model and predict the resource utilization and execution time of Hadoop jobs, which are affected by different attributes such as configuration parameters and the structure of the query. In order to estimate or predict, either qualitatively or quantitatively, the level of resource utilization and execution time, it is important to understand the impact of different combinations of these Hadoop job attributes. One could conduct experiments with many different combinations of parameters to uncover this, but it is very difficult to run such a large number of jobs and then interpret the data manually: it is hard to extract patterns from the data and produce a model that generalizes to an unseen scenario. Machine learning was therefore used to automate the process of data extraction and to model the complex behavior of the different attributes of a Hadoop job. Our decision tree based…
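The prediction task described above, mapping a vector of configuration parameters to an expected execution time, can be illustrated with a deliberately simple nearest-neighbour sketch. The thesis itself uses decision-tree models; the parameter names and timings below are hypothetical:

```python
def predict_runtime(history, config):
    """Predict a job's execution time as that of the most similar
    previously observed configuration (1-nearest-neighbour sketch).

    history: list of (param_vector, runtime_seconds) observations.
    """
    def dist(a, b):
        # squared Euclidean distance between parameter vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, runtime = min(history, key=lambda rec: dist(rec[0], config))
    return runtime

# Hypothetical observations: (map_tasks, io_sort_mb, replication) -> seconds
history = [
    ((4, 100, 3), 420.0),
    ((8, 200, 3), 250.0),
    ((16, 400, 2), 180.0),
]
print(predict_runtime(history, (9, 210, 3)))  # 250.0, nearest to (8, 200, 3)
```

A tree-based model would additionally expose which parameters dominate the prediction, which is why it suits the interpretability goal stated above.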
Advisors/Committee Members: Welch, Ian, Al-Sahaf, Harith.
Subjects/Keywords: Modelling and prediction; Performance; Hadoop Clusters; Machine learning
APA (6th Edition):
Tariq, H. (2020). Resource and Performance Modelling of Hadoop Clusters Using Machine Learning. (Doctoral Dissertation). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/8958
Chicago Manual of Style (16th Edition):
Tariq, Hassan. “Resource and Performance Modelling of Hadoop Clusters Using Machine Learning.” 2020. Doctoral Dissertation, Victoria University of Wellington. Accessed January 15, 2021.
http://hdl.handle.net/10063/8958.
MLA Handbook (7th Edition):
Tariq, Hassan. “Resource and Performance Modelling of Hadoop Clusters Using Machine Learning.” 2020. Web. 15 Jan 2021.
Vancouver:
Tariq H. Resource and Performance Modelling of Hadoop Clusters Using Machine Learning. [Internet] [Doctoral dissertation]. Victoria University of Wellington; 2020. [cited 2021 Jan 15].
Available from: http://hdl.handle.net/10063/8958.
Council of Science Editors:
Tariq H. Resource and Performance Modelling of Hadoop Clusters Using Machine Learning. [Doctoral Dissertation]. Victoria University of Wellington; 2020. Available from: http://hdl.handle.net/10063/8958

Victoria University of Wellington
16.
Stevens, Matt.
Applying Formal Modelling to the Specification and Testing of SDN Network Functionality.
Degree: 2016, Victoria University of Wellington
URL: http://hdl.handle.net/10063/6179
Software-Defined Networking offers a new paradigm for managing networks, one that favors centralised control over the distributed control used in legacy networks. This brings network operators potential efficiencies in capital investment, operating costs and a wider choice of network appliance providers. In this research we explore whether these efficiencies apply to all network functionality by applying formal modelling to create a mathematically rigorous model of a service, a firewall, and using that model to derive tests that are ultimately applied to two SDN firewalls and a legacy stateful firewall. In the process we discover that the only publicly available examples of SDN firewalls are not equivalent to legacy stateful firewalls and in fact create a security flaw that may be exploited by an attacker.
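A minimal model of the stateful firewall behaviour being tested might look like this sketch, with the expected behaviours written as assertions of the kind a model-based test suite would derive. It is an illustrative model only, not the thesis's formal specification:

```python
class StatefulFirewall:
    """Minimal model of a stateful firewall: outbound packets open a
    connection entry; inbound packets pass only if they match one.
    """
    def __init__(self):
        self.table = set()  # established (src, dst) flows

    def packet(self, src, dst, direction):
        if direction == "out":
            self.table.add((src, dst))
            return "allow"
        # inbound: allowed only as the reverse of an established flow
        return "allow" if (dst, src) in self.table else "drop"

fw = StatefulFirewall()
assert fw.packet("10.0.0.1", "8.8.8.8", "out") == "allow"
assert fw.packet("8.8.8.8", "10.0.0.1", "in") == "allow"   # established reply
assert fw.packet("1.2.3.4", "10.0.0.1", "in") == "drop"    # unsolicited
```

An SDN firewall that installed a plain bidirectional flow rule on the first outbound packet would pass the second assertion but also admit later unsolicited traffic, the kind of inequivalence with legacy stateful behaviour that model-derived tests can expose.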
Advisors/Committee Members: Ng, Bryan, Streader, David, Welch, Ian.
Subjects/Keywords: Formal methods; SDN; Firewall; Security; Architecture; Model based testing
APA (6th Edition):
Stevens, M. (2016). Applying Formal Modelling to the Specification and Testing of SDN Network Functionality. (Masters Thesis). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/6179
Chicago Manual of Style (16th Edition):
Stevens, Matt. “Applying Formal Modelling to the Specification and Testing of SDN Network Functionality.” 2016. Masters Thesis, Victoria University of Wellington. Accessed January 15, 2021.
http://hdl.handle.net/10063/6179.
MLA Handbook (7th Edition):
Stevens, Matt. “Applying Formal Modelling to the Specification and Testing of SDN Network Functionality.” 2016. Web. 15 Jan 2021.
Vancouver:
Stevens M. Applying Formal Modelling to the Specification and Testing of SDN Network Functionality. [Internet] [Masters thesis]. Victoria University of Wellington; 2016. [cited 2021 Jan 15].
Available from: http://hdl.handle.net/10063/6179.
Council of Science Editors:
Stevens M. Applying Formal Modelling to the Specification and Testing of SDN Network Functionality. [Masters Thesis]. Victoria University of Wellington; 2016. Available from: http://hdl.handle.net/10063/6179
17.
Mansoori, Masood.
Localisation of Attacks, Combating Browser-Based Geo-Information and IP Tracking Attacks.
Degree: 2017, Victoria University of Wellington
URL: http://hdl.handle.net/10063/6567
▼ Accessing and retrieving users’ browser and network information is a common practice used by advertisers and many online services to deliver targeted ads and explicit improved services to users belonging to a particular group. They provide a great deal of information about a user’s geographical location, ethnicity, language, culture and general interests. However, in the same way these techniques have proven effective in advertising services, they can be used by attackers to launch targeted attacks against specific user groups. Targeted attacks have been proven more effective against user groups than their blind untargeted counterparts (e.g.spam, phishing). Their detection is more challenging as the detection tools need to be located within the targeted user group. This is one of the challenges faced by security researchers and organisations involved in the detection of new malware and exploits, using client honeypots. Client honeypots are detection systems used in the identification of malicious web sites. The client honeypot needs to mimic users in a pre-defined location, system, network and personality for which the malware is intended. The case is amplified by the use of Browser Exploit Packs/kits (BEPs), supporting these features. BEPs provide simplicity in deployment of targeted malicious web sites. They allow attackers to utilise specific geographical locations, network information, visit patterns or browser header information obtained from a visiting user to determine if a user should be subjected to an attack.
Malicious web sites that operate based on targeted techniques can disguise themselves as legitimate web sites and bypass detection. Benign content is delivered to attacker-specified users while avoiding delivery to suspicious systems such as well-known or possible subnets that may host client honeypots. A client honeypot deployed in a single location with a single IP address will fail to detect an attack targeted at users in different demographic and network subnets. Failure in detection of such attacks results in high rates of false negatives which affect all honeypots regardless of detection technique or interaction level. BEPs are hugely popular and most include tracking features. The number of malicious web sites that utilise these features is currently unknown. There are very few studies that have addressed identifying the rate and number of malicious web sites utilising these techniques and no available client honeypot system is currently able to detect them. Any failure to detect these web sites will result in unknown numbers of users being exploited and infected with malware. The false negatives resulting from failing to detect these web sites can incorrectly be interpreted as a decline in the number of attacks.
In this work, the information that can potentially expose users to targeted attacks through a browser is examined through experimental analysis. Concrete approaches by attackers to obtain user-specific information in the deployment of targeted attacks through browsers are…
Advisors/Committee Members: Welch, Ian.
Subjects/Keywords: Geolocation Attacks; HAZOP; Client Honeypots; Browser Based Attacks; IP Tracking; Browser Exploit Kits; YALIH; Localized Attacks; Targeted attacks; Honeypots; Honey clients; Hazard and Operability
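The targeting mechanism the abstract describes can be sketched minimally: a server inspects standard HTTP request headers and the client IP to build a visitor profile, then gates content on it. The header names below are standard HTTP; the profiling and gating logic is a hypothetical illustration, not the thesis's implementation.

```python
def profile_visitor(headers, client_ip):
    """Extract the targeting signals a malicious server might read from a request."""
    # Accept-Language reveals locale preferences, e.g. "en-NZ,en;q=0.9"
    languages = [part.split(";")[0].strip()
                 for part in headers.get("Accept-Language", "").split(",") if part]
    return {
        "ip": client_ip,                              # basis for an IP geolocation lookup
        "languages": languages,                       # locale, hence likely country
        "user_agent": headers.get("User-Agent", ""),  # browser and OS fingerprint
    }

def should_attack(profile, target_countries=("NZ",)):
    """Hypothetical BEP-style gate: serve the exploit only to matching locales."""
    return any(lang.endswith(tuple(target_countries)) for lang in profile["languages"])
```

A real BEP combines several such signals (geolocation databases, visit counters, referrer checks), which is precisely what makes a single-vantage-point honeypot easy to exclude.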
APA (6th Edition):
Mansoori, M. (2017). Localisation of Attacks, Combating Browser-Based Geo-Information and IP Tracking Attacks. (Doctoral Dissertation). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/6567
18.
Esnaashari, Shadi.
Invisible Barriers: Identifying restrictions affecting New Zealanders' access to the Internet.
Degree: 2014, Victoria University of Wellington
URL: http://hdl.handle.net/10063/3263
▼ The Internet is an important technology worldwide. People use the Internet for research, communication, shopping, entertainment, etc. In addition to these benefits, the Internet provides access to dangerous or illegal material. Because of this, some content and services may be blocked by governments, Internet Service Providers, organizations, or individuals. This blocking, whether for security or for network efficiency, has significant effects on people’s access to services and information, which may not be considered when implementing restrictions. Although studies have been conducted on Internet blocking in many countries, no one has yet examined what is being blocked in New Zealand. In this thesis, we measured the prevalence of Internet blocking in New Zealand and the reasons leading to a decision to block access to websites or Internet services. Although several different tools existed, they could not be used directly because they either concentrated on a narrow range of services or did not work in an environment where some services they depended upon were blocked. For this reason, we developed our own tool called WCMT based on the issues identified from previous tools. We conducted our study using WCMT in order to identify blocked websites and services in our quantitative analysis, complemented by interviews with key informants in our qualitative analysis.
Advisors/Committee Members: Welch, Ian, Chawner, Brenda.
Subjects/Keywords: Monitoring tool; Blocking content; Blocking services
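The core measurement the abstract describes, probing a URL and classifying the outcome, can be sketched as below. The category names are assumptions for illustration; WCMT itself covers more services than plain HTTP, and a real tool must compare responses across vantage points to distinguish blocking from outages.

```python
import socket
import urllib.error
import urllib.request

def classify_access(url, timeout=5):
    """Return 'accessible', 'blocked_page', or 'unreachable' for a URL."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # A 2xx status suggests the site is reachable; a censoring middlebox
            # could still have substituted a block page, which a measurement tool
            # would detect by comparing content fetched from other vantage points.
            return "accessible" if 200 <= resp.status < 300 else "blocked_page"
    except (urllib.error.URLError, socket.timeout):
        # DNS failure, connection reset, or timeout: common blocking symptoms,
        # but also ordinary outages, hence the need for repeated measurements.
        return "unreachable"
```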
APA (6th Edition):
Esnaashari, S. (2014). Invisible Barriers: Identifying restrictions affecting New Zealanders' access to the Internet. (Masters Thesis). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/3263
19.
Koay, Abigail.
Detecting High and Low Intensity Distributed Denial of Service (DDoS) Attacks.
Degree: 2019, Victoria University of Wellington
URL: http://hdl.handle.net/10063/8069
▼ High and low-intensity attacks are two common Distributed Denial of Service (DDoS) attacks that disrupt Internet users and their daily operations. Detecting these attacks is important to ensure that communication, business operations, and education facilities can run smoothly. Many DDoS attack detection systems have been proposed in the past, but they lack the performance, scalability, and information-sharing ability needed to detect both high and low-intensity DDoS attacks accurately and early. To combat these issues, this thesis studies the use of Software-Defined Networking technology, entropy-based features, and machine learning classifiers to develop three components: a system architecture, a set of features, and an accurate and generalised traffic classification scheme. The findings from the experimental analysis and evaluation of the three components provide important insights for researchers seeking to improve the performance, scalability, and information-sharing ability of an accurate and early DDoS attack detection system.
Advisors/Committee Members: Welch, Ian, Seah, Winston.
Subjects/Keywords: Anomaly detection; Machine Learning; Software-Defined Networking; ML; SDN
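One example of the entropy-based features the abstract refers to is the Shannon entropy of an address distribution within a traffic window; under many flooding attacks that distribution shifts sharply, which a classifier can exploit. This sketch is illustrative only; window sizes, feature choices and thresholds in the thesis differ.

```python
import math
from collections import Counter

def shannon_entropy(items):
    """Shannon entropy (bits) of the empirical distribution of `items`."""
    counts = Counter(items)
    total = len(items)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A window dominated by a single destination IP (typical of a volumetric DDoS)
# has near-zero destination entropy; balanced benign traffic scores higher.
```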
APA (6th Edition):
Koay, A. (2019). Detecting High and Low Intensity Distributed Denial of Service (DDoS) Attacks. (Doctoral Dissertation). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/8069

Victoria University of Wellington
20.
Palmer, Ben.
Verifying Privacy Preserving Combinatorial Auctions.
Degree: 2009, Victoria University of Wellington
URL: http://hdl.handle.net/10063/867
▼ Suppose you are competing in an online sealed-bid auction for some goods. How do you know the auction result can be trusted? The auction site could be performing actions that support its own commercial interests by blocking certain bidders or even reporting incorrect winning prices. This problem is magnified when the auctioneer is an unknown party and the auctions are for high-value items. The incentive for the auctioneer to cheat can be high, as they could stand to make a significant profit by inflating winning prices or by being paid by a certain bidder to announce them the winner. Verification of auction results provides confidence in the auction result by making it computationally infeasible for an auction participant to cheat and not get caught. This thesis examines the construction of verifiable privacy-preserving combinatorial auction protocols. Two verifiable privacy-preserving combinatorial auction protocols are produced by extending existing auction protocols.
Advisors/Committee Members: Bubendorfer, Kris, Welch, Ian.
Subjects/Keywords: Verification; Zero knowledge; Auctions
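The verification idea can be illustrated with the simplest relevant building block, a hash commitment: a bidder publishes a commitment to a sealed bid before the auction closes, then reveals the bid and nonce so anyone can check it. This is only a sketch of one primitive; the thesis's protocols additionally keep losing bids private using zero-knowledge techniques.

```python
import hashlib

def commit(bid: int, nonce: bytes) -> str:
    """Commitment to a bid: hash of nonce || bid. The nonce must be random
    and secret until the reveal, so the commitment hides the bid value."""
    return hashlib.sha256(nonce + str(bid).encode()).hexdigest()

def verify(commitment: str, bid: int, nonce: bytes) -> bool:
    """Anyone can re-derive the commitment from the revealed bid and nonce."""
    return commit(bid, nonce) == commitment
```

A bidder who tries to change their bid after seeing others' reveals cannot produce a nonce that matches the published commitment, which is the "computationally infeasible to cheat and not get caught" property in miniature.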
APA (6th Edition):
Palmer, B. (2009). Verifying Privacy Preserving Combinatorial Auctions. (Masters Thesis). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/867

21.
Stirling, David.
Enhancing Client Honeypots with Grid Services and Workflows.
Degree: 2009, Victoria University of Wellington
URL: http://hdl.handle.net/10063/1177
▼ Client honeypots are devices for detecting malicious servers on a network. They interact with potentially malicious servers and analyse the Web pages returned to assess whether these pages contain an attack. This type of attack is termed a 'drive-by-download'. Low-interaction client honeypots operate a signature-based approach to detecting known malicious code. High-interaction client honeypots run client applications in full operating systems that are usually hosted by a virtual machine. The operating systems are either internally or externally monitored for anomalous behaviour. In recent years there have been a growing number of client honeypot systems being developed, but there is little interoperability between systems because each has its own custom operational scripts and data formats. By creating interoperability through standard interfaces we could more easily share usage of client honeypots and the data collected. Another problem is providing a simple means of managing an installation of client honeypots.
Workflows are a popular technology for allowing end-users to co-ordinate e-science experiments, so these workflow systems can potentially be utilised for client honeypot management. To formulate requirements for management we ran moderate-scale scans of the .nz domain over several months using a manual script-based approach. The main requirements were a system that is user-oriented, loosely coupled, and integrated with Grid computing, allowing for resource sharing across organisations. Our system design uses Grid services (extensions to Web services) to wrap client honeypots, a manager component acts as a broker for user access, and workflows orchestrate the Grid services. Our prototype wraps our case study, Capture-HPC, with these services, using the Taverna workflow system and a Web portal for user access. When evaluating our experiences we found that while our system design met our requirements, currently a Java-based application operating on our Web services provides some advantages over our Taverna approach, particularly for modifying workflows, maintainability, and dealing with failure. The Taverna workflows, however, are better suited for the data analysis phase and have some usability advantages. Workflow languages such as Taverna are still relatively immature, so improvements are likely to be made. Both of these approaches are significantly easier to manage and deploy than the previous manual script-based method.
Advisors/Committee Members: Welch, Ian, Komisarczuk, Peter.
Subjects/Keywords: Honeypot (computing); Network security; Security; Distributed systems
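The orchestration pattern the abstract describes, wrapping each honeypot operation behind a service interface and letting a workflow chain them, can be sketched minimally. All function names here are hypothetical stand-ins, not Capture-HPC or Taverna APIs.

```python
def run_workflow(url, steps):
    """Thread a data record through an ordered list of step functions,
    the essential shape of a linear service-orchestration workflow."""
    record = {"url": url}
    for step in steps:
        record = step(record)
    return record

def visit(record):
    record["visited"] = True  # stand-in for driving a real browser at the URL
    return record

def classify(record):
    # stand-in for comparing system state before and after the visit;
    # here we fake it by inspecting the URL string
    record["malicious"] = record.get("visited", False) and "exploit" in record["url"]
    return record
```

Because each step only consumes and returns a record, steps can live behind remote service endpoints and be recombined per experiment, which is the interoperability benefit the thesis pursues.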
APA (6th Edition):
Stirling, D. (2009). Enhancing Client Honeypots with Grid Services and Workflows. (Masters Thesis). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/1177

22.
Delwadia, Vipul.
RemoteME: Experiments in Thin-Client Mobile Computing.
Degree: 2009, Victoria University of Wellington
URL: http://hdl.handle.net/10063/1260
▼ Mobile phones are ubiquitous; however, they are vastly underpowered compared to their desktop counterparts. We propose a technique for playing potentially resource-intensive games over a network, and provide a prototype system called RemoteME which implements this technique. We also explore the responsiveness requirements for systems of this nature and establish benchmarks for responsiveness via user studies. We evaluate our implementation by measuring its responsiveness and comparing it to these benchmarks.
Advisors/Committee Members: Marshall, Stuart, Welch, Ian.
Subjects/Keywords: Remote computing; Mobile communication systems; Ubiquitous computing; Client/server computing
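The responsiveness measurement the abstract describes, timing the round trip from sending an input event to receiving the rendered frame, can be sketched as below. The 200 ms threshold is a placeholder assumption, not the benchmark established by the thesis's user studies.

```python
import time

def measure_round_trip(send_event, receive_frame):
    """Round-trip latency in seconds for one input event: send the event,
    wait for the corresponding frame, and time the whole exchange."""
    start = time.monotonic()
    send_event()
    receive_frame()
    return time.monotonic() - start

def responsive_enough(latencies, threshold_s=0.2):
    """True if the median round trip is under the responsiveness threshold."""
    ordered = sorted(latencies)
    median = ordered[len(ordered) // 2]
    return median < threshold_s
```

Using the median rather than the mean keeps a few network hiccups from dominating the verdict, which matters when comparing measured latencies against a user-derived benchmark.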
APA (6th Edition):
Delwadia, V. (2009). RemoteME: Experiments in Thin-Client Mobile Computing. (Masters Thesis). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/1260