Hacker News | 5706906c06c's comments

Why do law firms believe GDPR is a legal issue? Privacy and security have never been a purely legal matter, though legal counsel is often in the mix when dealing with regulations. I'm curious why GDPR continues to be treated as a legal problem when the regulation is clear about its intent and requirements.


Because the answers to a lot of detailed questions are not obviously clear, especially if you have an interest in not just taking the strictest possible reading, so people want legal opinions on them. And where those legal opinions strongly disagree, there will be legal proceedings to have the courts clarify the matter.


Fair. I guess it's far more pressing for the Data Controllers versus Processors (my case), so I'll stop with my biased view.


I did a similar implementation in production using DigitalShadows. We basically created "honeywords" in the database at random, then had DS monitor for them out in the wild. That included random lines in the source code that did nothing other than serve as IoCs.
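A minimal sketch of the seeding side of that setup (the prefix, lengths, and row counts are illustrative; the actual DigitalShadows integration is out of scope):

```python
import secrets
import string

def make_honeyword(prefix: str = "hw", length: int = 20) -> str:
    """Generate a random token that no real user or process would ever
    produce, so any sighting of it in the wild is an indicator of
    compromise (IoC)."""
    alphabet = string.ascii_letters + string.digits
    suffix = "".join(secrets.choice(alphabet) for _ in range(length))
    return f"{prefix}_{suffix}"

# Seed a handful of decoy values alongside real rows; the monitoring
# service then alerts on any external match.
honeywords = [make_honeyword() for _ in range(5)]
```

Using `secrets` rather than `random` matters here: the tokens must be unguessable so a match can only mean exfiltration.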


This seems miserable to maintain. I feel like maintaining a code base with code in it that is simply there to be grepped in the wild would make it incredibly messy.


I ran up a $20k+ Macie bill scanning 5 buckets in 24hrs.


What kinds of results did it generate?


A healthy amount of data that looked like PII based on data ranges, plus potential secrets in buckets: CSVs, JSONs, CloudTrail dumps. But it also generated reports on dummy data, and without fingerprinting of the live data it can't know what's real and what isn't. The CloudTrail feature is also useful, since it provides user-behavior analytics based on usage.


The CT stuff looks interesting, since it's inexpensive, and the other monitoring thing Amazon does (GuardDuty?) is expensive.


It's $4 per million API calls processed, and it starts at $1 per GB of logs processed.

Which pricing dimension is of concern?
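For a rough sense of scale, here is a back-of-the-envelope sketch using the two rates above (the data volumes below are made up for illustration, not taken from the earlier $20k anecdote):

```python
# Rates from the comment above: $4 per million API calls,
# $1 per GB of data processed.
PER_MILLION_CALLS = 4.00
PER_GB = 1.00

def macie_cost(gb_processed: float, api_calls: int) -> float:
    """Estimate a Macie bill across the two pricing dimensions."""
    return gb_processed * PER_GB + (api_calls / 1_000_000) * PER_MILLION_CALLS

# ~20 TB of objects plus 50M API calls lands around $20,200;
# the data-processed dimension dominates.
print(macie_cost(gb_processed=20_000, api_calls=50_000_000))  # 20200.0
```

The point: a handful of large buckets is enough to reach a five-figure bill on the per-GB dimension alone.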


Not for GitHub itself, but the TruffleHog project on GitHub might be of interest to you. There is also SourceClear, which does the same for secrets in GitHub repositories.

Note: AWS also monitors access-key use and API thresholds to keep you informed.
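As a toy illustration of one signal such scanners look for (TruffleHog itself relies on entropy analysis and many more patterns; this regex-only version is my simplification):

```python
import re

# AWS access key IDs have a well-known shape: "AKIA" followed by 16
# uppercase alphanumeric characters -- a high-signal pattern to grep for.
AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_aws_keys(text: str) -> list[str]:
    """Return all strings in `text` shaped like AWS access key IDs."""
    return AWS_KEY_RE.findall(text)
```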


Not the case. I've seen seasoned developers (not to single them out) make simple, stupid mistakes with S3 bucket ACLs, permissions, and policies. The issue has to do with the sheer laziness of the "let's create unstructured data buckets, write once, and forget it all" mentality. At some point, this sort of service can be useful in identifying the "crown jewels" within the buckets. Beyond that, the ACL denies access by default, so I can't agree with your assertion that AWS is somehow making this difficult in order to sell more services and drive vendor lock-in.


Just a day or two ago on HN... https://github.com/eth0izzle/bucket-stream


Yes, thank you for linking it, but I fail to see the correlation. This tool scans public HTTPS endpoints based on keywords in its dictionary to discover misconfigured buckets. AWS doesn't manage the bucket permissions/ACL; the customer does. AWS's shared-responsibility model clearly defines all of this. The customer is responsible for the bucket ACL; the same would apply if I ran my stack in a data center and configured Apache/NGINX with open directory indexes that allowed anyone to traverse them.
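For context, the core check a tool like bucket-stream automates can be sketched as follows (the virtual-hosted URL form is standard S3; the status-code mapping is my reading of S3's behavior, and candidate names would come from its dictionary plus certificate-transparency logs):

```python
import urllib.error
import urllib.request

def classify(status_code: int) -> str:
    """Map an S3 HTTP status to a bucket state: 200 means the listing
    is public, 403 means the bucket exists but is private, 404 means
    no such bucket."""
    return {200: "public-listing", 403: "private", 404: "missing"}.get(
        status_code, "unknown"
    )

def probe_bucket(name: str) -> str:
    url = f"https://{name}.s3.amazonaws.com/"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return classify(resp.status)
    except urllib.error.HTTPError as e:
        return classify(e.code)
    except urllib.error.URLError:
        return "unreachable"
```

Nothing here touches AWS credentials or APIs; it only observes what the bucket owner has already made public, which is the commenter's point about where the responsibility sits.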


If you have data that matters, it needs dual controls. The idea that a company would place PII on a publicly accessible site protected only by an ACL is ridiculous.

Instead of futzing with machine learning, use network or crypto controls to prevent access, and have a different chain of command manage that access in your company.
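As one concrete example of a non-ACL control, a bucket policy can refuse any upload that isn't server-side encrypted with KMS (the bucket name below is a placeholder; this is the standard deny-unless-SSE pattern, not a complete control set):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedPuts",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "StringNotEquals": { "s3:x-amz-server-side-encryption": "aws:kms" }
      }
    }
  ]
}
```

Pairing a policy like this with KMS key administration held by a separate team gives the "different chain of command" the comment describes.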


Feinstein is one outstanding oligarch.


TL;DR - I'm not saying it isn't real, but I'm not saying it's fake either; http://ualrpublicradio.org/post/100-million-leonardo-da-vinc...


Agree, regulations could deter entrepreneurs from trying since the barrier to entry could be high. Then again, regulations like HIPAA haven't really stopped a slew of Digital Health shops from trying, so it might not be as bad.


Guessing no wife or kids?


Don't let them watch one minute of it, it's all trash.


YouTube has tons of great videos that are awesome for watching with children. My 8-year-old loves the PBS Digital Studios shows like Eons and It's Okay to Be Smart, and Crash Course.

Even I like some of the better nursery-rhyme videos in foreign languages, for my target language. I hadn't realized that even the good channels repackage their material into different compilations for the ad views. But for us, YouTube is something that's on the TV in the living room, where we can all see and agree on the videos we watch. Still, it's hard to filter out the filthy language, weirdness, and pointless cruelty.

The problem the author identified exists in other parts of the kid Internet, too. With Web-based games, sites are aggregating weird and disturbing games alongside the OK or merely junky ones, and kids are sharing these sites with each other.


What part is trauma/abuse? I don't get it.


Practically speaking, all of it. The prolonged effects of handing a child an iDevice are damaging to their psyche. Kids aren't in a position to make rational decisions about what is and isn't acceptable content. The YouTube Kids content filtering isn't nearly as advanced as it should be, so parents end up spending a considerable amount of time attempting to filter, with little success. These videos, along with those "Daddy Finger" songs, adults unwrapping toys, and "Ryan's toy review" (where the little brat gets all the toys and destroys them), are mind-numbingly pointless and damaging, considering their lack of value. At some point, parents need to consider the unknown factors and the possibility of (incidental) trauma.

At least, that's the conclusion I've come to after watching my four-year-old consume some of the above. Counter to that, the reduction of screen time has turned her more empathetic toward her sibling, though I can only state that qualitatively.


I still don't see what actual harm is being done here.

I agree that screen/device time is generally bad, but this goes whether it happens to be YTKids, Netflix or games.

What specifically is "wrong with the internet"? I still don't see the issue here.


Did you watch the video where a series of Marvel heroes were captured and buried up to their necks in sand? These videos are nightmare fuel for children.


I agree the violence in the superhero video (especially with the slapstick music) is a bit disturbing. Same with the gorilla Daddy Finger.

So what is the immediate solution for parents? Block/uninstall YTKids?


Yes. Block it. Turn it off. Try handing them crayons and paper. YTKids is a product begging for your time and attention, and that of your kids. It sucks, and beyond that it may be damaging, so stop buying it (with your kid's attention). Turn it off.


Never let your kids browse the open internet without supervision. Create curated playlists that they can watch when you aren't able to devote 100% of your time to their supervision but still want to occupy their time with videos.


Block it. If not, then filter the content, though I can tell you their content-blocking algorithm is terrible at catching new content, so you end up in a game of cat and mouse.


In another comment, someone said that YTKids has an option to disable related videos; that could be a good start.


Yup. Get video apps from actual companies that have at least a semblance of QC.

PBS kids, Disney Jr, Nick Jr.


I think the author's implication is the 'wrongness' is the development of an ecosystem which encourages the creation of sadistic content marketed to children.


Did you read the article and watch the example videos?

