Master Theses & Projects (FBIT)

Recent Submissions

  • Item
    Filtering honeywords using probabilistic context free grammar
    (2023-10-01) Tanniru, Alekhya; Vargas Martin, Miguel
    With the growing prevalence of cyber threats, effective password policies have become crucial for safeguarding sensitive information. Traditional password-based authentication techniques are vulnerable to a number of threats. The idea of honeywords, developed to improve password-based security, entails storing dummy passwords alongside real ones to build a defence mechanism based on deceit. This study examines the importance of password policies in the context of honeywords, emphasizing how they can improve security and reduce password-related risks. We present the idea of extracting a policy from existing passwords and using this policy to filter for good, strong passwords. Through this capstone project, we aim to contribute to the broader understanding of honeywords and their role in improving password-based authentication systems. We conducted experiments on the Chunk-GPT3 and GPT-4 models to see which one produces more honeywords that closely resemble the real passwords.
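    The policy-based filtering described above can be sketched as follows. This is a minimal illustration in the spirit of classic PCFG password models (base structures such as L5D3S1), not the thesis's actual implementation, and the `top_k` policy cut-off is an assumption:

```python
import re
from collections import Counter

def structure(password: str) -> str:
    """Map a password to a PCFG-style base structure, e.g. 'hello123!' -> 'L5D3S1'."""
    parts = []
    for run in re.findall(r"[a-zA-Z]+|\d+|[^a-zA-Z\d]+", password):
        if run[0].isalpha():
            tag = "L"  # letter run
        elif run[0].isdigit():
            tag = "D"  # digit run
        else:
            tag = "S"  # symbol run
        parts.append(f"{tag}{len(run)}")
    return "".join(parts)

def extract_policy(passwords, top_k=3):
    """Treat the most frequent structures in existing passwords as the learned policy."""
    counts = Counter(structure(p) for p in passwords)
    return {s for s, _ in counts.most_common(top_k)}

def filter_honeywords(candidates, policy):
    """Keep only honeyword candidates whose structure matches the extracted policy."""
    return [c for c in candidates if structure(c) in policy]
```

A honeyword whose structure never occurs among real passwords is easy to spot, which is why filtering by learned structure is plausible here.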
  • Item
    Enhancing password security: a quest for optimal honeywords
    (2023-10-01) Nety, Meher Viswanath; Vargas Martin, Miguel
    In this capstone report, our primary focus is on harnessing the capabilities of the GPT-4 model to enhance password security through the generation of honeywords. Honeywords are decoy passwords designed to strengthen the security of sensitive systems by confusing potential attackers. The use of GPT-4, a powerful language model developed by OpenAI, offers an innovative approach to this challenge. By directly generating honeywords without relying on password segmentation, GPT-4 introduces a unique dimension to password security. This approach is particularly valuable in thwarting targeted attacks, as honeywords generated by GPT-4 are designed to deceive potential attackers effectively. In addition to the exploration of GPT-4, this report also delves into Chunk-GPT3. Chunk-GPT3, as detailed in previous research, employs advanced language models to generate honeywords by segmenting passwords into discrete chunks, which are then recombined to form decoy passwords. The re-engineered Chunk-GPT3 approach incorporates enhancements to the password segmentation process, including "mapping digits to alphabets" and "removal of digits" functions. These modifications aim to produce more potent and effective honeywords, ultimately elevating password security. The report includes a comprehensive comparative analysis of honeywords generated by the original and re-engineered Chunk-GPT3 approaches, as well as honeywords created by GPT-4. By assessing the effectiveness of these honeyword generation methods using the HWSimilarity metric, the report provides insights into the strengths and weaknesses of each approach. By examining the capabilities of both GPT-4 and Chunk-GPT3 in the context of honeyword generation, this report aims to provide a holistic perspective on cutting-edge strategies for safeguarding sensitive data in the ever-evolving digital landscape.
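    The HWSimilarity comparison scores how closely each generated honeyword resembles the real password. A rough sketch of such a similarity metric is below; note that the published metric relies on pretrained language-model embeddings, so the character-bigram vectors used here are only an illustrative stand-in, not the actual HWSimilarity computation:

```python
import math
from collections import Counter

def bigram_vector(word: str) -> Counter:
    """Character-bigram counts, standing in for a language-model embedding."""
    return Counter(word[i:i + 2] for i in range(len(word) - 1))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hw_similarity(real: str, honeywords: list[str]) -> float:
    """Average similarity between the real password and its honeywords;
    higher means the decoys are harder to tell apart from the real one."""
    rv = bigram_vector(real)
    return sum(cosine(rv, bigram_vector(h)) for h in honeywords) / len(honeywords)
```

Under this kind of metric, a honeyword set scoring near 1.0 is nearly indistinguishable from the real password, while a score near 0.0 indicates obvious decoys.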
  • Item
    Guarding the gate: using honeywords to enhance authentication security
    (2023-10-01) Koppada, Gowtham; Vargas Martin, Miguel
    A honeyword (false password) is a decoy that resembles the characteristics of the original password. It is very challenging for an attacker to distinguish between a real password and a honeyword containing personal information (PI). Using honeyword generation techniques (HGTs), these honeywords are generated in bulk, and the hashed honeywords are placed in an organization's database with triggers to identify a breach before it is too late. According to previous research, an HGT may fail if the generated honeywords do not contain the personal information of the user, making it easy for the attacker to perform a targeted attack. It is good practice to include chunks containing PI, or parts of the original password of that particular user, in the generated honeywords to make them look natural. To generate such honeywords from chunks, we use prompt engineering for large language models (LLMs). In this report, we improve the existing prompt, making it easier for the LLM to develop a deep understanding and produce better output. In addition, we compare the existing base GPT model with newer GPT models such as GPT-3.5-turbo and GPT-4. Using password strength as the base factor, we report which model outperformed the others.
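    The report above does not spell out its password-strength measure; a common naive estimate, shown here purely as an assumed stand-in for whatever the thesis used, scores a password by its length times the log of its apparent character pool:

```python
import math
import string

def charset_size(password: str) -> int:
    """Size of the character pool the password appears to draw from."""
    pool = 0
    if any(c.islower() for c in password):
        pool += 26
    if any(c.isupper() for c in password):
        pool += 26
    if any(c.isdigit() for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)  # 32 printable ASCII symbols
    return pool

def strength_bits(password: str) -> float:
    """Naive entropy estimate in bits: length * log2(pool size)."""
    pool = charset_size(password)
    return len(password) * math.log2(pool) if pool else 0.0
```

Such an estimate rewards both longer passwords and mixed character classes, which is one plausible way to rank honeywords generated by the competing models.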
  • Item
    Enhancing password security: advancements in password segmentation technique for high-quality honeywords
    (2023-07-01) Sannihith Lingutla, Satya; Vargas Martin, Miguel
    Passwords play a major role in the field of network security, serving as a first line of defense against attackers seeking unauthorized access to user profiles. However, passwords are vulnerable to various types of attacks, making it essential to ensure that they are strong, unique, and confidential. One of the major techniques that evolved over time to enhance password security is the use of honeywords, decoy passwords designed to alert the administrator when a data breach has happened. The main goal of this project is to address one of the limitations of a honeyword generation technique, called Chunk-GPT3, by performing better password segmentation through a re-engineered chunking algorithm that maps digits into characters, which would seem to lead to better honeywords. We justify our re-engineering method and generate honeywords that we compare to those generated by Chunk-GPT3. Nonetheless, after evaluating the honeywords using the HWSimilarity metric, the results suggest that improved chunking does not necessarily lead to better honeywords in all cases.
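    The "maps digits into characters" step can be pictured as undoing common leet-style substitutions before segmentation, so the segmenter sees natural-language words. The mapping table below is a hypothetical example for illustration, not the one defined in the thesis:

```python
# Hypothetical digit-to-letter map in the spirit of the re-engineered
# chunking step; the thesis's exact mapping is not reproduced here.
LEET_REVERSE = {"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t"}

def map_digits(password: str) -> str:
    """'Mapping digits to alphabets': undo leet substitutions so the
    segmenter sees readable words (e.g. 'p4ssw0rd' -> 'password')."""
    return "".join(LEET_REVERSE.get(c, c) for c in password)

def remove_digits(password: str) -> str:
    """'Removal of digits': strip remaining digits that are not
    meaningful substitutions, leaving only the word material."""
    return "".join(c for c in password if not c.isdigit())
```

After this normalization, a dictionary-based or LLM-based segmenter can split the result into chunks that read as real words, which is the property the re-engineered pipeline aims for.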
  • Item
    Matching expectations and reality in AI systems - cybersecurity use case
    (2023-04-01) Defo Aymar, Tala; Lewis, Peter
    Artificial intelligence (AI) is a growing field in computer science that develops intelligent systems capable of performing tasks a human mind can do. Manufacturers of security systems integrate AI capabilities into their systems for threat hunting, and market them with an emphasis on the security features AI provides. This study compares the expectations set by marketed AI features against reality in a use case of a cybersecurity system. To this end, we evaluated a system in a real-life environment with a large amount of data sent to it for analysis. Our evaluation demonstrates that, first, the virtual security analyst feature provided by the system cannot replace a human security analyst, as it can perform only 3 of the 8 tasks of a human security analyst. Second, marketing claims exaggerate the features provided by AI in the system.
  • Item
    Systems and models for secure fallback authentication
    (2018-12-01) Addas, Alaadin; Thorpe, Julie
    Fallback authentication (FA) techniques such as security questions, email resets, and SMS resets have significant security flaws that easily undermine the primary method of authentication. Security questions have been shown to be often guessable. Email resets assume a secure channel of communication and pose the threat of the avalanche effect, where one compromised email account can compromise a series of other accounts. SMS resets also assume a secure channel of communication and are vulnerable to attacks on telecommunications protocols. Additionally, all of these FA techniques are vulnerable to the known adversary: any individual with elevated knowledge of a potential victim, or elevated access to a potential victim's devices, who uses these privileges with malicious intent, undermining the most commonly used FA techniques. An authentication system is only as strong as its weakest link; in many cases this is the FA technique used. As a result, we explore one new and one altered FA system: GeoPassHints, a geographic authentication system paired with a secret note, and GeoSQ, an autobiographical authentication scheme that relies on location data to generate questions. We also propose three models to quantify the known adversary in order to establish an improved measurement tool for security research. We test GeoSQ and GeoPassHints for usability, security, and deployability through a user study with paired participants (n=34). We also evaluate the models for the purpose of measuring vulnerabilities to the known adversary by correlating the scores obtained in each model to the successful guesses that our participant pairs made.