Most research on internet fraud has focused on victims' cognitive and personality vulnerabilities and ignored that scammers have often been victims of financial cyber-crimes themselves [1]. These victim-offenders express retaliation as a motive to offend, highlighting the overlooked role of emotions and social learning in cyber-scams [2]. A broader understanding of the motives, emotions, and knowledge of victim-offenders, offenders only, and victims only might improve awareness campaigns and security training [3]. In this talk, I use a life-course perspective on social learning [4] to examine how media and social sources, prior victimization, and knowledge and attitudes about relationships contribute to committing internet fraud. Data are drawn from two large self-report surveys of victimization and perpetration across a wide range of internet frauds. Deviant friends and family members, mentors, online discussions, and contacts on the dark web increase support for retaliation and provide praise for perpetrating internet fraud. Those who attended victim support groups and are familiar with dating-app etiquette have more accurate knowledge about suspicious communications on dating apps. Beyond low self-control, psychopathy, and committing fraud in the real world, those with higher rates of victimization more often perpetrated cyber-fraud. The life-course perspective suggests that a broader view of the emotional and social context of offending might improve the content and focus of awareness campaigns and security training, which often ignore how scammers learn manipulative tactics from friends, family, media, and online sources. This focus might also enhance AI tools that detect and intercept fraudulent messages on dating and social media sites.
Adversarial training has recently emerged as an important defense mechanism to robustify machine learning models in the presence of adversarial examples. Although adversarial training can boost the robustness of machine learning algorithms by a considerable margin, little research has examined whether it remains effective in the long term. Because deployments of machine learning algorithms are inherently dynamic, change of the underlying model is inevitable: models evolve over time as new training data are introduced and drift as their parameters change. In this paper, we examine the limitations of adversarial training due to the temporal changes of machine learning models. Using a natural language task, we conduct experiments on a variety of datasets to measure the impact of concept drift on the efficacy of adversarial training. Our analysis shows that certain adversarially trained models are more prone to drift than others; in particular, WordCNN- and LSTM-based models are more susceptible to temporal changes than models such as BERT. We validate our findings using multiple real-world datasets and different network architectures. Our work calls for further research into the temporal aspects of adversarial training.
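As a rough illustration of the kind of temporal evaluation described above, the following sketch trains a simple text classifier on the earliest time bucket of a dataset and measures its accuracy on later buckets to expose concept drift. The dataset path and column names are hypothetical, and adversarial training and adversarial example generation are omitted for brevity; the same protocol would apply to measuring adversarial (robust) accuracy over time.

```python
# Hypothetical sketch of temporal (concept-drift) evaluation; the CSV path and
# the text/label/year columns are placeholders, not the paper's datasets.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

df = pd.read_csv("reviews_with_years.csv")  # columns: text, label, year (hypothetical)

# Train on the earliest time bucket (adversarial training would be applied here).
first_year = df.year.min()
train = df[df.year == first_year]
vec = TfidfVectorizer(max_features=50_000)
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(train.text), train.label)

# Evaluate the fixed model on each later bucket; a declining accuracy curve
# indicates concept drift eroding whatever robustness training provided.
for year, bucket in df[df.year > first_year].groupby("year"):
    acc = accuracy_score(bucket.label, clf.predict(vec.transform(bucket.text)))
    print(f"{year}: accuracy = {acc:.3f}")
```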
Malware is one of the most serious computer security threats, and accurate detection of malware is essential to protect computers from infection. At the same time, malware detection faces two main practical challenges: the pace of malware development and distribution continues to increase, and malware increasingly employs complex techniques (such as metamorphism or polymorphism) to evade detection. This research uses a variety of characterizing features extracted from each sample through static and dynamic analysis to build seven machine learning models that detect and analyze packed Windows malware. We use a large-scale dataset of over 107,000 samples covering unpacked malware and malware packed with ten different packers, and we examine the performance of seven machine learning techniques using 50 static and dynamic features. Our results show that packed malware can circumvent detection when only a single type of analysis is performed, while applying both static and dynamic methods improves detection accuracy by around 2% to 3%.
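A minimal sketch of the feature-combination idea follows: static and dynamic feature vectors are joined per sample, and a classifier is trained on the static-only, dynamic-only, and combined feature sets for comparison. The file names, feature layout, and classifier choice are hypothetical placeholders rather than the paper's actual pipeline.

```python
# Hypothetical sketch: compare static-only, dynamic-only, and combined features
# for packed-malware detection. File names and columns are placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

static = pd.read_csv("static_features.csv", index_col="sha256")    # e.g., PE header fields, imports
dynamic = pd.read_csv("dynamic_features.csv", index_col="sha256")  # e.g., API-call and network behavior
labels = pd.read_csv("labels.csv", index_col="sha256")["is_malicious"]

feature_sets = {
    "static only": static,
    "dynamic only": dynamic,
    "combined": static.join(dynamic, how="inner"),
}

for name, X in feature_sets.items():
    y = labels.loc[X.index]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(f"{name}: accuracy = {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```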
Online services that provide books, music, movies, etc., for free have existed on the Internet for decades. While there are common beliefs and warnings that such online services may carry hidden security risks, many ordinary users still visit these websites, making them a convenient vehicle for subsequent exploitation. In this paper, we investigate and quantify through measurements the potential vulnerability of such free content websites (FCWs). For this purpose, we curated 834 FCWs offering books, games, movies, music, and software. For comparison, we also sampled a comparable number of premium content websites, where users pay for the same types of content. Our modality of analysis is SSL certificates: we explore the structural and fundamental differences between the certificates of free and premium content websites. Through our analysis, we find that 36% of the free websites' certificates have major issues: 17% are invalid, 7% are expired, and 12% have mismatched domain names. Moreover, somewhat surprisingly, we uncover the predominant use of ECDSA among the free websites. Among other observations, we note that 38% of the FCWs use ECDSA-256, compared to only 20% of their premium counterparts, even though it provides better security guarantees (and performance) than the common algorithm and key size (RSA-2048) used by premium websites. Our observations raise concerns regarding the safety of using such free services from a transport-security standpoint and call for an in-depth analysis of their risks.
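To illustrate the kind of per-site certificate checks described above (public-key algorithm and size, expiry, and hostname match), the sketch below fetches and inspects a site's leaf certificate using Python's ssl module and the cryptography library. It is a simplified illustration under those assumptions, not the measurement pipeline used in the paper.

```python
# Simplified certificate inspection: key algorithm/size, expiry, hostname match.
import datetime
import ssl

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

def inspect_certificate(host: str, port: int = 443) -> dict:
    # Fetch the leaf certificate in PEM form (no chain validation performed here).
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())

    key = cert.public_key()
    if isinstance(key, ec.EllipticCurvePublicKey):
        algo = f"ECDSA-{key.curve.key_size}"
    elif isinstance(key, rsa.RSAPublicKey):
        algo = f"RSA-{key.key_size}"
    else:
        algo = type(key).__name__

    expired = cert.not_valid_after < datetime.datetime.utcnow()

    # Simplified hostname check against the Subject Alternative Names.
    try:
        sans = cert.extensions.get_extension_for_class(
            x509.SubjectAlternativeName).value.get_values_for_type(x509.DNSName)
    except x509.ExtensionNotFound:
        sans = []
    matched = any(host == s or (s.startswith("*.") and host.endswith(s[1:])) for s in sans)

    return {"host": host, "key_algorithm": algo, "expired": expired, "hostname_match": matched}

if __name__ == "__main__":
    print(inspect_certificate("example.com"))
```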
The Address Resolution Protocol (ARP) plays a critical role in the Internet protocol suite; however, it was not designed with security in mind, as it does not verify that a response to an ARP request actually comes from an authorized party. Adversaries can exploit this weakness to send spoofed ARP messages onto a Local Area Network (LAN) and poison victims' ARP caches. Once an attacker succeeds in an ARP spoofing attack, they can mount a man-in-the-middle (MITM) attack to relay or modify data, or launch a denial-of-service (DoS) attack. It is therefore crucial to detect and counter ARP cache poisoning attacks. A number of works have proposed solutions against this attack; we reviewed this literature to assess how effective the proposed security solutions are at detecting and countering ARP cache poisoning, and we observed that the suggested mechanisms are not effective enough at detecting and mitigating it. To this end, this paper proposes a distributed algorithm that instantly detects an ARP cache poisoning attack, discovers information about the host(s) used by the attacker, and finally counters the attack using the acquired information. We implemented a prototype of the proposed algorithm, called an agent, that runs on every host within the network. Agents communicate with each other to orchestrate a distributed security tool that detects and mitigates ARP cache poisoning attacks.
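The sketch below shows a minimal, single-host version of the core detection idea: passively watch ARP replies and flag an IP address that is suddenly claimed by a different MAC address. It uses scapy, requires packet-capture privileges, and omits the inter-agent coordination and active countermeasures that the proposed distributed algorithm provides.

```python
# Minimal single-host ARP spoofing detector (illustrative sketch only); the
# distributed agent coordination and countermeasures in the paper are omitted.
from scapy.all import ARP, sniff  # requires scapy and root/packet-capture privileges

ip_to_mac = {}  # locally observed IP -> MAC bindings

def check_arp(pkt):
    # op == 2 means an ARP reply ("is-at"), which is what cache poisoning relies on.
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
        known = ip_to_mac.get(ip)
        if known is not None and known != mac:
            print(f"[!] Possible ARP cache poisoning: {ip} was {known}, now claimed by {mac}")
        else:
            ip_to_mac[ip] = mac

if __name__ == "__main__":
    sniff(filter="arp", prn=check_arp, store=False)
```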
A home is an emotional investment, a retreat, a safe space. The various elements of a home create an experience unlike that of any other physical space. Alongside these elements, a home instills belonging and familiarity, creating a bond between the boundaries of the home and the individual. However, this bond may be destroyed by the intrusion of uninvited individuals, an intrusion fueled by the lack of security in smart home devices. Although they provide convenience, smart home devices can be breached for various reasons, including vulnerable sensors, faulty data-protection mechanisms, and susceptibility to malware and design flaws. In this paper, we analyze the smart home through the theory of territoriality. By incorporating this theory, our goal is to analyze how cyberattacks on smart devices can disrupt an individual's experience of the home.