Decoy Routing, the use of routers (rather than end hosts) as proxies, is a new direction in anti-censorship research. Decoy Routers (DRs), placed in Autonomous Systems (ASes), proxy traffic from users, so the adversary, e.g., a censorious government, attempts to avoid them.
It is quite difficult to place DRs so that the adversary cannot route around them: for example, the cooperation of 850 ASes is needed to contain China alone. In this work, we consider a different approach. We begin by noting that DRs need not intercept all the network paths from a country, only those leading to Overt Destinations (ODs), i.e., unfiltered websites hosted outside the country (usually popular ones, so that client traffic to the OD does not arouse the censor's suspicion).
Our first question is: how many ASes must install DRs to intercept a large fraction of paths from, e.g., China to the top n websites (as per Alexa)? How does this number grow with n? To our surprise, the same few (≈ 30) ASes intercept over 90% of paths to the top n sites worldwide, for n = 10, 20, ..., 200, and also to other destinations. Investigating further, we find that this result fits perfectly with the hierarchical model of the Internet; our first contribution is to demonstrate, with real paths, that the number of ASes required for a worldwide DR framework is small (≈ 30). Further, a censor nation's attempt to filter traffic along the paths transiting these 30 ASes would block not only its own citizens, but also users residing in foreign ASes. Our second contribution in this paper is to consider the details of DR placement: not just in which ASes DRs should be placed to intercept traffic, but exactly where in each AS. We find that even with our small number of ASes, we still need a total of about 11,700 DRs. We conclude that, even though a DR system involves far fewer ASes than previously thought, it remains a major undertaking. For example, the current routers cost over 10.3 billion USD; if Decoy Routing at line speed requires all-new hardware, the cost alone would make such a project unfeasible for most actors (though not for major nation states).
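The coverage question above can be cast as a set-cover computation over AS-level paths; below is a minimal sketch with hypothetical toy paths (a greedy heuristic, not necessarily the selection method used in the study):

```python
# Toy illustration of choosing ASes to intercept paths. The paths below are
# hypothetical examples, not the measured routes from the study.
paths = [
    ["AS4134", "AS174", "AS3356", "AS15169"],
    ["AS4837", "AS3356", "AS16509"],
    ["AS4134", "AS1299", "AS32934"],
]

def greedy_dr_placement(paths, target=0.9):
    """Repeatedly pick the AS lying on the most still-uncovered paths,
    until `target` fraction of all paths is intercepted."""
    remaining = list(paths)
    chosen, covered = [], 0
    while covered < target * len(paths):
        counts = {}
        for p in remaining:
            for asn in p:
                counts[asn] = counts.get(asn, 0) + 1
        best = max(counts, key=counts.get)   # AS on the most uncovered paths
        chosen.append(best)
        covered += sum(1 for p in remaining if best in p)
        remaining = [p for p in remaining if best not in p]
    return chosen, covered / len(paths)
```

On the toy input, two ASes already intercept every path, mirroring the paper's observation that a handful of well-placed ASes cover most routes.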
In this work, we present a detailed study of Internet censorship in India. We consolidated a list of potentially blocked websites from various public sources and used it to assess the censorship mechanisms of nine major ISPs.
To begin with, we demonstrate that existing censorship detection tools such as OONI are grossly inaccurate. We therefore developed various techniques and heuristics to correctly assess censorship and to study the underlying mechanisms these ISPs employ. At every step we corroborated our findings manually to test the efficacy of our approach, a step largely ignored by others. We strengthen our findings by assessing the coverage and consistency of the censorship infrastructure, broadly in terms of the average number of network paths and requested domains that the infrastructure surveils.
Our results indicate a clear disparity among the ISPs in how they deploy censorship infrastructure. For instance, in the Idea network we observed censorious middleboxes on over 90% of the intra-AS paths we tested, whereas for Vodafone the figure is as low as 2.5%. We conclude our research by devising our own censorship-circumvention strategies, which do not depend on third-party tools (such as proxies, Tor, or VPNs). Using these strategies, we were able to unblock all blocked websites in all ISPs under test.
This work presents a study of the Internet infrastructure in India from the point of view of censorship. First, we show that the current state of affairs, where each ISP implements its own content filters (nominally as per a governmental blacklist), results in dramatic differences in the censorship experienced by customers. In practice, a well-informed Indian citizen can escape censorship through a judicious choice of service provider.
We then consider the question of whether India might potentially follow the Chinese model and institute a single, government-controlled filter. This would not be difficult, as the Indian Internet is quite centralized already. A few "key" ASes (about 1% of Indian ASes) collectively intercept 95% of paths to the censored sites we sample in our study, and also to all publicly visible DNS servers. 5,000 routers spanning these key ASes would suffice to carry out IP or DNS filtering for the entire country; 70% of these routers belong to only two private ISPs. If the government is willing to employ more powerful measures, such as an IP prefix hijacking attack, any one of several key ASes can censor traffic for nearly all Indian users. Finally, we demonstrate that such federated censorship by India would cause substantial collateral damage to non-Indian ASes whose traffic passes through Indian cyberspace (and which do not legally fall under Indian jurisdiction at all).
Decoy Routing (DR), a promising new approach to censorship circumvention, uses routers (rather than end hosts) as proxy servers. Users of censorious networks, who wish to use DR, send specially crafted packets, nominally addressed to an uncensored website. Once safely out of the censorious network, the packets encounter a special router (the Decoy Router) which identifies them using a secret handshake, decrypts their content, and proxies them to their true destination (a censored site). However, DR has implementation problems: it is unfeasible to reprogram routers for the complex operations required. Existing DR solutions fall back on using commodity servers as Decoy Routers, but as servers are not efficient at routing, most web applications show poor performance when accessed over DR. A further concern is that the DR has to inspect all flows in order to identify the ones that need Decoy Routing; this may itself be a breach of privacy for other users (who neither want Decoy Routing nor wish to be monitored).
In this work, we present a novel DR system, SiegeBreaker, which solves the above problems using an SDN-based architecture. Unlike previous proposals, where a single unit performs all major operations (inspecting all flows, identifying the Decoy Routing requests, and proxying them), SiegeBreaker distributes the tasks for Decoy Routing among three independent modules. (1) The SDN controller identifies the Decoy Routing requests via a covert, privacy preserving scheme.
(2) The reconfigurable SDN switch intercepts packets and forwards them to a secret proxy. (3) The secret proxy server proxies the client's traffic to the censored site. Our modular, lightweight design shows performance comparable to direct TCP operations (even at line rates of 1 Gbps), both in emulation setups and in Internet-based tests involving commercial SDN switches.
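The covert identification step can be illustrated with a toy tagging scheme: the client embeds, in an otherwise random-looking field (e.g., the TLS ClientHello random), an HMAC tag computed with a secret shared with the controller, so the controller recognises DR flows while ordinary flows fail the check. This is only a sketch of the general idea; SiegeBreaker's actual handshake, field layout, and key management are assumptions here:

```python
import hmac, hashlib, os

SECRET = b"shared-decoy-secret"   # hypothetical pre-shared key

def make_client_random(secret=SECRET):
    """Client side: 16 random bytes plus a 16-byte truncated HMAC tag,
    packed into a 32-byte field that looks uniformly random to a censor."""
    nonce = os.urandom(16)
    tag = hmac.new(secret, nonce, hashlib.sha256).digest()[:16]
    return nonce + tag

def is_decoy_request(client_random, secret=SECRET):
    """Controller side: recompute the tag over the nonce half; genuine
    random fields from non-DR users match only with negligible probability."""
    nonce, tag = client_random[:16], client_random[16:]
    expected = hmac.new(secret, nonce, hashlib.sha256).digest()[:16]
    return hmac.compare_digest(tag, expected)
```

Because the tag is keyed, a censor without the secret cannot distinguish tagged flows from ordinary TLS traffic, which is what makes the signalling covert and privacy-preserving.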
Existing techniques to measure bandwidth between two hosts require the experimenter to have control over one or both hosts. In this paper, we present Telemetron, the first active bandwidth measurement tool that can estimate the path capacity between two remote hosts, using only an off-path Measuring Machine (MM).
It is possible to cause traffic to flow between off-path remote hosts. Sending request packets to one host with a spoofed source IP (so the packets look like they come from the other host) causes the first host to send reply packets to the other. The challenge is for MM to measure the rate at which these packets arrive. Our key observation is that if the second machine has a global IP-ID counter, the arrival of packets (or, more precisely, the number of replies they cause) can be monitored remotely, using probes from MM. By observing the rate of increment of the global IP-ID counter, MM estimates the path capacity between the remote hosts. Telemetron shows high accuracy in both laboratory and Internet tests. On average, the path capacity reported is 92.5% of the theoretical limit.
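The IP-ID side channel above can be sketched in a few lines: successive probes to the reflector read its global IP-ID counter, the increments (modulo 2^16, minus the replies elicited by MM's own probes) give the reply rate, and multiplying by packet size yields a capacity estimate. The helper names and the simple estimator below are illustrative assumptions, not Telemetron's exact algorithm:

```python
def ipid_increments(samples):
    """samples: list of (time_sec, ipid) pairs from periodic probes to a
    host with a globally incrementing 16-bit IP-ID counter."""
    total = 0
    for (t0, a), (t1, b) in zip(samples, samples[1:]):
        total += (b - a) % 65536          # handle 16-bit wraparound
    return total

def path_capacity_bps(samples, probes_per_interval, pkt_bytes=1500):
    """Estimate capacity: IP-ID increments minus the replies generated by
    our own probes, times packet size in bits, over the measurement window.
    (Hypothetical helper; the real tool's estimator may differ.)"""
    span = samples[-1][0] - samples[0][0]
    n_intervals = len(samples) - 1
    replies = ipid_increments(samples) - probes_per_interval * n_intervals
    return replies * pkt_bytes * 8 / span
```

The modulo arithmetic is essential: the IP-ID field is only 16 bits, so a busy host wraps the counter many times per second and naive subtraction would report negative increments.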
National governments know the Internet as both a blessing and a headache. On the one hand, it unlocks great economic and strategic opportunity. On the other hand, government, military, or emergency services become vulnerable to scans (e.g., via Shodan), attacks (e.g., DDoS from botnets like Mirai), etc., when made accessible on the Internet.
How hard is it for a national government to effectively secure its entire cyberspace? We approach this problem from the view that a coordinated defense involves monitors and access control (firewalls etc.) to inspect traffic entering or leaving the country, as well as internal traffic. In several case studies, we consistently find a natural Line of Defense — a small number of Autonomous Systems (ASes) that intercept most (> 95%) network paths in the country. We conclude that in many countries, the structure of the Internet actually makes it practical to build a nation-scale cordon, to detect and filter cyber attacks.
Real-world deployment of mix networks is a challenge, and lags behind the relatively more recent low-latency systems. Many theoretical results and analyses exist, but they do not adequately bridge the gap between theory and practice. One of the main deployment challenges is deciding among the different mixnet building blocks, and whether a given combination of them yields the best system in terms of anonymity. The MiXiM framework fills this gap: it provides the means to systematically analyse mix network designs along a number of dimensions, and supports the mix network adopter in taking practical decisions backed by empirical evidence. The framework is flexible and allows one to quickly set up experiments investigating a large space of combinations of mix network building blocks, such as mixing strategies and network topologies, as well as the different parameters of each component. The framework provides a number of metrics covering the anonymity, end-to-end latency, and overheads of mix networks.
Popular instant messaging applications such as WhatsApp and Signal provide end-to-end encryption for billions of users. These applications often rely on a centralized, application-specific server to distribute public keys and relay encrypted messages between the users. As a result, they prevent passive attacks but are vulnerable to some active attacks. A malicious or hacked server can distribute fake keys to users to perform man-in-the-middle or impersonation attacks. While typical secure messaging applications provide a manual method for users to detect these attacks, this burdens users, and studies show it is ineffective in practice. This research presents KTACA, a completely automated approach for key verification that is oblivious to users and easy to deploy. We motivate KTACA by designing two approaches to automatic key verification: one uses client auditing (KTCA), and the second uses anonymous key monitoring (AKM). Each on its own has weaker security properties, leading to KTACA, which combines the two approaches to provide the best of both worlds. We provide a security analysis of each defense, identifying which attacks they can automatically detect. We implement the active attacks to demonstrate they are possible, and we also create a prototype implementation of all the defenses to measure their performance and confirm their feasibility. Finally, we discuss the strengths and weaknesses of each defense, the load they impose on clients and service providers, and their deployment considerations.
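The core automated check can be illustrated as comparing key fingerprints obtained over two independent channels (the key the server hands out vs. the key seen by an independent, anonymous observer); a mismatch signals a possible server-side man-in-the-middle or impersonation attack. This is a toy sketch of the general idea, not KTACA's full protocol:

```python
import hashlib

def fingerprint(pubkey_bytes):
    """Short, human-independent digest of a public key (illustrative)."""
    return hashlib.sha256(pubkey_bytes).hexdigest()[:16]

def cross_check(server_key, monitored_key):
    """Automated verification: the key the server distributes for a user
    must match the key observed through the independent channel; any
    mismatch indicates the server may be distributing fake keys."""
    return fingerprint(server_key) == fingerprint(monitored_key)
```

The point of automating this comparison is that it removes the manual fingerprint-verification step that studies show users rarely perform correctly.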