
〰 Tidal, Archiloque's feed reader

Hacker News: Kalashnikov’s new autonomous weapons and the “Terminator conundrum”
Comments
Planet PostgreSQL: Hans-Juergen Schoenig: Finding patterns in timeseries: A poor man’s method

A lot has been written about timeseries analysis and handling temporal data in general. Countless papers outlining various strategies have been posted and published all over the internet. However, in many cases a lot of the real technology is hidden behind colorful marketing papers without real meaning and without any useful content. Still: Analyzing timeseries […]

The post Finding patterns in timeseries: A poor man’s method appeared first on Cybertec - The PostgreSQL Database Company.

Hacker News: Am I a bad developer?
Comments
Hacker News: The Batman Killer – a prescription for murder?
Comments
Hacker News: Sandsifter: find undocumented instructions and bugs on x86 CPU
Comments
Hacker News: Life Inside Hong Kong’s ‘Coffin Cubicles’
Comments
Lambda the Ultimate: Happy Birthday, dear Lambda: 17 is good edition

Seventeen years ago to the day, LtU was born. I guess it's about time I stop opening these birthday messages by saying how remarkable this longevity is (this being the fate of Hollywood actresses over 25). Still, I cannot resist mentioning that 17 is "good" (טוב) in gematria, which after all is one of the oldest codes there are. It is very cool that the last couple of weeks saw a flurry of activity. This old site still got game.

I will not try to summarize or pontificate. The community has grown too big and too unruly for that and I have been more an absent landlord recently than a true participant (transitioning from CS to being a professional philosopher of science and having kids took a bit more of my free time than I expected).

One thing I always cherished about LtU was that we welcomed both professional, academic work, and everything and anything that was cool and fun in programming languages. It was never a theory-only site. So here's a little birthday party game instead of a long summary.

Which new (or old) languages inspire you to think that a good language can smoothly allow people to reach heretofore hard-to-reach semantic or abstraction levels?

I mean things that affect how the little guy thinks, not formal semantics, category theory, or what have you.

I'll start with two unconventional languages that I have had the pleasure (and exasperation) of using recently. Both are in some respects descendants of Logo, the language through which I was introduced to programming when I was a ten-year-old child in Brookline. They are NetLogo and ScratchJr.

NetLogo is a language for building agent-based models (here's a classic ABM for you to enjoy; if you install NetLogo there's an implementation in the model library). While some aspects of the language semantics (and syntax) are irritating, NetLogo is very good at what it does. I may say more in the comments, but the key is that a simulation consists of multiple agents, who can move and interact, and the language makes building such simulations straightforward. There is a central clock; you can address multiple agents using conditions ("ask guys with [color = red] [die]"); implicitly have code run by each agent; and so on. In fact, you hardly think about these issues. If you have no previous background in programming, it feels natural that keeping track of these and other details is not part of the programming task.
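To make the flavor concrete, here is a rough Python analogue of that "ask guys with [color = red] [die]" pattern. Everything in it (the Agent class, the ask helper) is invented for illustration; NetLogo gives you all of this implicitly.

    from dataclasses import dataclass

    @dataclass
    class Agent:
        color: str
        alive: bool = True

    agents = [Agent("red"), Agent("blue"), Agent("red")]

    def ask(condition, command):
        # Run a command for every agent matching the condition, the way
        # NetLogo's central clock lets you address subsets of agents.
        for agent in agents:
            if condition(agent):
                command(agent)

    # NetLogo's one-liner: ask guys with [color = red] [die]
    ask(lambda a: a.color == "red", lambda a: setattr(a, "alive", False))
    print([a.alive for a in agents])  # [False, True, False]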

ScratchJr is for young kids. It allows them to create little animated scenes, which may be responsive to touch and so on. You can record sounds and take pictures and use them as first-class elements in your animations. But the nice thing for me was to notice how natural it is for kids to use the event-driven model (you can "write" several bits of code, and each will execute when the triggering event happens; no need to think about orchestrating this) as well as intuitively understand how this may involve things happening concurrently. These things just emerge from the way the animations are built; they are not concepts the programmer has to explicitly be aware of (which is a good thing, considering she is typically a five-year-old).

When I see philosophy students writing NetLogo and reasoning about the behavior of the agents, and when I see kids playing with ScratchJr, I am reminded why I found this business of "engineering abstractions" so enticing when I first used structured programming to design vocabulary for the program I was writing and when I heard some language described as a language for stratified design.

So which are your nominations for cool language-based abstractions, for the little guy? Just to give us all some motivation, and maybe get me worked up enough to finally delve into algebraic effects?

Happy birthday, LtU-ers. Keep fighting the good fight!

Le Clavier Cannibale: On the corpulence of books (and a henceforth fatal prerogative)
(A very big bouc…)
In summer, the Clavier Cannibale, too busy sucking the marrow out of forthcoming books, doesn't bother: it recycles, which is almost eco-friendly. So here is what we published on November 15, 2013:


In an article recently published on Salon, Laura Miller ponders "long books," those behemoths that, according to her, haunt literary critics. Having nothing interesting to say, she arrives at this twofold conclusion: when a long book is good, it's great; when it's bad (or hard to read), it's done for. Donna Tartt, yes; Thomas Pynchon, no. I am barely caricaturing the poverty of her argument. Mind you, for her, Stephen King's Doctor Sleep is a "big" book. One dares not imagine her reaction if she were locked in a cellar with Alan Moore's Jerusalem.

Three years ago, it was the young debut novelist Garth Risk Hallberg who, on The Millions, looked into the same question. His e-paper is a bit more interesting. First because he recalls the contextual reasons that long explained the existence of "long novels" (or "big books"): serialized publication, of which the Victorian novel is the example par excellence. Then because he raises a paradox of our era: the current profusion of "big books" supposedly collides with the growing attention deficit disorders that are our henceforth fatal prerogative (the "désormais fatal apanage" of the title). Hallberg nevertheless notes that several paper mammoths have managed to cross the Rubicon of the critics and the Alps of the readership: Littell and his The Kindly Ones, Bolaño and his 2666, Chris Adrian and The Children's Hospital, Wallace and Infinite Jest, etc. Hallberg also posits that, in terms of quality-to-weight ratio, the reader is frankly getting a bargain: Vollmann's Imperial would be more "cost-effective" than some slim opus by Mario Bellatin. Finally, and above all, reading long books is, still according to Garth, a way of "joining the resistance." It must be said that Garth Risk Hallberg is preaching for his own parish: he has just finished a 900-page book, City on Fire, whose rights were bought for 2 million dollars by the American publisher Knopf. But let's wait and read the thing before taking refuge in the soft cocoon of our attention deficit disorders…

In short, the debate over the size of books is ultimately rather pointless. But it is revealing. For the critics, the notion of "form" is no longer structural but ponderal. One can already see the day coming when you will be asked: "So, is X's new book in shape?" or "Say, hasn't Y's short story collection slimmed down a bit?" or "If I were W's book, I'd be careful: it has put on a few too many pages lately," or "Have you read S's book? It can't even get into bookstores anymore, the way it gorges itself on stream of consciousness," or "F's saga really should go on a diet."

Fortunately, everyone knows that reading is a form of exercise…




Hacker News: A Closer Look at Swift Playgrounds for iPad
Comments
Hacker News: Metaclasses: Thoughts on generative C++
Comments
Les Crises: “Trump ends a secret CIA program to arm anti-Assad rebels in Syria, a move Moscow wanted”

Source: The Washington Post, Greg Jaffe & Adam Entous, 19-07-2017

In a decision that reflects his willingness to work with Russia, President Trump has decided to end a covert CIA program that supported Syrian rebels fighting President Bashar al-Assad.

President Trump has decided to end a covert CIA program to arm and train moderate Syrian rebels fighting President Bashar al-Assad, a decision that, according to US officials, Moscow had been seeking.

This program was the essential element of a policy begun by the Obama administration in 2013 and intended to put pressure on Assad to force him to step down. Yet even the program's supporters had questioned its effectiveness ever since Russia deployed its forces in Syria two years later.

Read more

Hacker News: Why we should learn German
Comments
Les Crises: Jacques Morel: “The new chief of staff defended the perpetrators of the Rwandan genocide”

Source: L’Humanité, Lola Ruscio, 21-07-2017

July 20: Emmanuel Macron alongside General François Lecointre during a visit to the Istres military base. Jean-Paul Pelissier/Reuters

Hacker News: Flush times for hackers in booming cyber security job market
Comments
QC RSS: A World Of Pleasure And Pain





Hacker News: An Update from Coinbase Regarding Bitcoin Cash (BCC)
Comments
Hacker News: Samsung ends Intel's 2-decade-plus reign in microchips
Comments
Hacker News: Formed by Megafloods, This Place Fooled Scientists for Decades
Comments
Hacker News: SpaceX Is Now One of the World’s Most Valuable Privately Held Companies
Comments
Hacker News: Scala 2.12.3 Released (Significant Compiler Speedup)
Comments
Hacker News: Higher-paid, faster-growing tech jobs are concentrating in 8 US hubs
Comments
Ars Technica: How a podcaster managed to confront his tech support scammer, in person

This November 2015 photo appears to be a company photo of Accostings, which Reply All identified as an India-based tech support scam company. Kamal Verma is standing in a black shirt with a watch in the center of the photo. (credit: Kamal Verma)

The following post contains spoilers of Reply All episode #102: Long Distance, which was released on July 27, 2017. If you don't wish to know what happens in that episode, read no further.

Here at Ars, we are no strangers to online tech support scammers. For years now, we have played along with scammers, cajoled them, and called them out on their tricks. Such scams are notoriously difficult to shut down.

But we never even dreamed of doing what the podcast Reply All has done in an amazing episode that was released Thursday morning: doggedly pursue corporate records, find Facebook profiles of at least one company executive, and even manage to have extended conversations with one of them before trying to confront him. In person. In India.

Read 93 remaining paragraphs | Comments

roguelike development: FAQ Fridays REVISITED #18: Input Handling

FAQ Fridays REVISITED is a FAQ series running in parallel to our regular one, revisiting previous topics for new devs/projects.

Even if you already replied to the original FAQ, maybe you've learned a lot since then (take a look at your previous post, and link it, too!), or maybe you have a completely different take for a new project? However, if you did post before and are going to comment again, I ask that you add new content or thoughts to the post rather than simply linking to say nothing has changed! This is more valuable to everyone in the long run, and I will always link to the original thread anyway.

I'll be posting them all in the same order, so you can even see what's coming up next and prepare in advance if you like.


THIS WEEK: Input Handling

Translating commands to actions used to be extremely straightforward in earlier console roguelikes, which used blocking input and simply translated each key press to its corresponding action on a one-to-one basis. Nowadays many roguelikes include mouse support, often a more complex UI, and some form of animation, all of which can complicate input handling, bringing roguelikes more in line with other contemporary games.

How do you process keyboard/mouse/other input? What's your solution for handling different contexts? Is there any limit on how quickly commands can be entered and processed? Are they buffered? Do you support rebinding, and how?
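As one concrete starting point, here is a minimal Python sketch of a common answer to the context question: one keymap per UI context, turning raw key presses into game actions and making rebinding a matter of mutating a dict. All names are illustrative rather than taken from any particular engine.

    from enum import Enum, auto
    from typing import Optional

    class Action(Enum):
        MOVE_N = auto()
        MOVE_S = auto()
        OPEN_INVENTORY = auto()
        CANCEL = auto()

    # One keymap per UI context; rebinding a key just mutates these dicts.
    KEYMAPS = {
        "dungeon":   {"k": Action.MOVE_N, "j": Action.MOVE_S, "i": Action.OPEN_INVENTORY},
        "inventory": {"escape": Action.CANCEL},
    }

    def handle_key(context: str, key: str) -> Optional[Action]:
        # Translate a raw key press into an action for the current context;
        # None means the key is unbound there and can simply be ignored.
        return KEYMAPS[context].get(key)

    print(handle_key("dungeon", "k"))    # Action.MOVE_N
    print(handle_key("inventory", "k"))  # None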


All FAQs // Original FAQ Friday #18: Input Handling

submitted by /u/Kyzrati
[link] [comments]
Hacker News: Adventures in Outsourcing: Cooking with TaskRabbit
Comments
Hacker News: State of Elm 2017 Results – Brian Hicks
Comments
Les Crises: Our respects, General, by Richard Labévière

Source: Proche & Moyen Orient, Richard Labévière, 24-07-2017

You cannot say one thing and its opposite and then do the reverse of what you say… Lower taxes but start by raising the CSG; break the labor code while claiming to improve it; announce an increase in the defense budget (to reach 2% of GDP in 2025) and then, on the eve of July 14, decree a flat cut of 850 million euros and a freeze of 2.7 billion, nearly 10% of the overall budget of our armed forces.

The announced cuts hit sovereign functions first: 47% of the cancellations, or 1.4 billion euros, spread across Defense, the Interior, and the Ministry of Foreign Affairs. Yet defense, counter-terrorism, and diplomatic renewal are the very themes the new president of the Republic has put forward as national priorities. And it is their budgets that are being slashed first, in the haste of a publicity stunt! There is clearly a fundamental inconsistency between reality and the version the president of the Republic distills on his smartphone…

To fund the 850 million in cuts, the Direction générale de l'armement (DGA) will, as usual, turn to manufacturers to renegotiate lower prices and deferred equipment deliveries. Bad news for manufacturers who directly employ 165,000 people on national territory. Jobs and subcontractors will obviously suffer… The list of affected programs should be known by the end of the summer, but we already know who will take the first hit: 360 armored vehicles cancelled for the army, the aerospace sector targeted, deliveries of intermediate frigates and refueling aircraft pushed back…

In the French Navy, frigates will have to stretch from 25 to more than 40 years of service. While French elites boast, almost daily, of holding the world's second-largest maritime domain, with some 11 million km2 of EEZ (exclusive economic zone), "the Navy is watching the number of its overseas patrol vessels collapse: by 2020, six out of eight will have been decommissioned, with no assurance about the schedule for their replacement," comments a general officer, who adds: "our country still does not have a maritime strategy suited to the country's interests, let alone a naval strategy."

Read more

Hacker News: What are potential disadvantages of functional programming?
Comments
Hacker News: GE’s Jeffrey Immelt Is on Uber's CEO Shortlist
Comments
Hacker News: Age of Empires: Definitive Edition Beta
Comments
Hacker News: What is Windows doing while hogging that lock?
Comments
Ars Technica: Police body cam footage of man tased in back prompts $110K settlement

(video link)

Body cam footage of an Aurora, Colorado, cop tasing an unarmed black man in the back paved the way for the city to pay $110,000 to settle police abuse allegations, the man's lawyers told Ars Thursday.

Footage from September's tasing shows two black men being questioned by police who are responding to a weapons incident at a nearby apartment building. One of the men is seen and overheard on the video demanding to know why he's being questioned. "For what... ?" he says.

Read 8 remaining paragraphs | Comments

Hacker News: Slack's 404 page
Comments
Ars Technica: Specific area of the brain helps keep the body young

(credit: Daniele Meli)

Age may not be a state of mind, but the brain is definitely involved. That's the conclusion of a study published on Wednesday in the journal Nature, which provides compelling evidence that a specific structure in the brain, called the hypothalamus, plays a significant role in controlling the entire body's aging. The results suggest stem cells play a critical role, but only in part via their ability to generate new neurons.

The results come from researchers at the Bronx's Albert Einstein College of Medicine. They, along with several other labs, have generated evidence suggesting that the hypothalamus plays a key role in aging. That makes a certain amount of sense: aging is a systemic process, and the hypothalamus contains structures like the pituitary that release hormones that influence the entire body. And there have already been some indications that factors that control the dynamics of aging end up circulating through the blood.

Aging and stem cells

But what controls the timing of aging? One intriguing possibility is that neural stem cells are involved. These stem cells continue to divide and produce new neurons even after the brain is fully developed, but their numbers appear to go down over time (possibly because more of them produce new neurons than are replaced by cell divisions). If the key factors are produced by neural stem cells, then their levels should go down over time, allowing aging to proceed.

Read 8 remaining paragraphs | Comments

Hacker News: MRelief (YC W16) Is Hiring a Rails Dev in Chicago
Comments
Hacker News: Kerbal Space Oddities
Comments
Hacker News: A survey of BSD kernel vulnerabilities (DEF CON) [pdf]
Comments
Hacker News: The Deal That Jeff Bezos Got on Basecamp
Comments
Hacker News: Amazon Hub
Comments
Hacker News: Ask HN: What are the 5 websites you visit almost daily?
Comments
Hacker News: I’ve Had a Cyberstalker Since I Was 12 (2016)
Comments
Ars Technica: As dominance of launch market looms, SpaceX now valued at $21 billion

Maye Musk and Elon Musk attend the 2017 Vanity Fair Oscar Party in Beverly Hills. (credit: Taylor Hill/Getty Images)

After two serious accidents in 2015 and 2016, SpaceX has been on a tear in 2017 with 10 successful launches, including the historic re-flight of two used boosters and a used Dragon spacecraft. These achievements suggest the company is well on its way toward developing low-cost, reusable boosters, and therefore the rocket company founded by Elon Musk may be on the cusp of capturing much of the global launch market.

A new valuation appears to back up this optimism. According to the New York Times, SpaceX recently raised $350 million in additional funding, and during this process the company was valued at $21 billion. This represents a significant increase from 2015, when Google and Fidelity invested $1 billion in SpaceX, valuing the company at $12 billion.

The new report notes that the updated value of SpaceX places the company in rarefied air, as just six other venture-backed companies are valued at $20 billion or more around the world. These companies include US-based companies Uber, Airbnb, Palantir, and WeWork, as well as Chinese firms Didi Chuxing and Xiaomi.

Read 3 remaining paragraphs | Comments

Hacker News: Google Home Is 6 Times More Likely to Answer Your Question Than Amazon Alexa
Comments
roguelike development: ROMVLVS, a Civ/SimCity crossover

Hey there! For the past few years I've been developing ROMVLVS, a Civ/SimCity crossover built using Libtcod/Python. (That makes it a roguelike, right? :)

Here are some screenshots with a bit of an explanation: http://imgur.com/a/o8gvO

The engine is quite simple: production of resources (human, food, wood) is governed by differential equations (sort of) which are modified by the environment. Some dispersion takes place to distribute resources over the map.

I don't have a build for download that reflects these screenshots, but you can look in the development log for some older builds (Linux only for now): http://romvlvsgame.tumblr.com/

Hope you like it, and I think this is the start of a beautiful friendship!

submitted by /u/CaptainHennessey
[link] [comments]
Hacker News: Show HN: Hacking an ultrasound probe with a raspberry and low-cost OS hardware
Comments
Hacker News: It looks like the state of California is bailing out Tesla
Comments
Ars Technica: Windows 10 Creators Update now available to all, November Update end-of-life’d

The announcement of the Creators Update in October 2016. (credit: Ars Technica)

Some four months after its initial release, Microsoft says it has opened the floodgates and is now pushing out Windows 10 version 1703, the Creators Update, to every compatible PC (a category that excludes systems using Intel's Clover Trail Atoms).

Earlier this month, AdDuplex, which tracks the penetration of the different Windows 10 versions, reported that as of July 18, the Creators Update had just passed 50 percent of Windows 10 systems. Forty-six percent are on the previous version, 1607 (aka the Anniversary Update).

Until now, the deployment of the Creators Update has been throttled to stage its rollout. That throttle is now removed, so most of that 46 percent should now start upgrading. Microsoft is also saying that with this full rollout, enterprise customers should have confidence deploying the update. With Microsoft getting rid of the "Current Branch" and "Current Branch for Business" nomenclature, this is the closest thing to a signal that the version is enterprise-ready.

Read 1 remaining paragraphs | Comments

Hacker News: Where’s all my CPU and memory gone? The answer: Slack
Comments
Hacker News: Gilad Bracha – Composing Software in an Age of Dissonance
Comments
Hacker News: Tech is the best industry for women (from a female software engineer)
Comments
Ars Technica: Tiny pillars put light and sound in a quantum superposition

It's now possible to precisely fabricate very small pillars. (credit: Stanford University)

In recent years, there has been a lot of interest in coupling sound and light together. Admittedly, we've been doing this for a long time, but we've always been limited in terms of what we can do by the ways that nature puts materials together. Now, with our ability to construct structures that are the right size, we can make devices that really dance to the tune that we give them.

This control has been demonstrated in a very cute way recently. Researchers have put together micro pillars that convert light into long-lasting, very high-frequency sound waves.

Nature leads the way

Nature, of course, allows sound and light to play together in different ways. For instance, if a gas absorbs light, it will heat up and expand, so flashing a light into a gas will generate a sound wave at the frequency of the flashing. One of the most sensitive techniques for measuring how materials absorb light makes use of this.

Read 16 remaining paragraphs | Comments

Planet PostgreSQL: Craig Kerstiens: Database Table Types with Citus and Postgres

Citus is Postgres that scales out horizontally. We do this by distributing queries across multiple Postgres servers—and as is often the case with scale-out architectures, this scale-out approach provides some great performance gains. And because Citus is an extension to Postgres, you get all the awesome features in Postgres such as support for JSONB, full-text search, PostGIS, and more.

The distributed nature of Citus gives you new flexibility when it comes to modeling your data. This is good. But you’ll need to think about how to model your data and what type of database tables to use. The way you query your data ultimately determines how you can best model each table. In this post, we’ll dive into the three different types of tables in Citus and how you should think about each.

Distributed tables

We’re going to start with a distributed table, which is the type of table you probably think of first when you think of Citus.

In Citus, a distributed table is a table that is spread across multiple server nodes. We do this by ‘sharding’ the database into multiple smaller tables, called ‘shards’, that we then distribute across different physical nodes.

For example, if you choose to create a Citus distributed table such as an orders table with a citus.shard_count of 48, then your database would be split into 48 shards (i.e. 48 smaller tables)—and then if you had a 2-node cluster then you would have 24 shards on one node and 24 shards on the other. (If you’re curious to learn more about sharding Postgres databases, there is a primer just for you.)
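As a sketch of what that setup can look like from Python (create_distributed_table and the citus.shard_count setting are documented Citus APIs; the connection string and table definition are made up for this example):

    import psycopg2

    # Hypothetical connection to the Citus coordinator node.
    conn = psycopg2.connect("dbname=shop host=coordinator.example.com")
    cur = conn.cursor()

    cur.execute("SET citus.shard_count = 48;")  # 48 shards, as in the example
    cur.execute("""
        CREATE TABLE orders (
            customer_id bigint NOT NULL,
            order_id    bigint NOT NULL,
            total       numeric
        );
    """)
    # Hash-distribute the table across the cluster by customer_id.
    cur.execute("SELECT create_distributed_table('orders', 'customer_id');")
    conn.commit()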

Sample orders sharded structure

With a distributed table, you do need to specify the key you’re going to shard your data by. The good news is that SaaS applications often have some natural distribution key. And when you have a natural distribution key for your tables, it can make sense to shard all or most of your tables by this key, so that your data is co-located.

If you manage an application like Shopify or Etsy—or if you develop a B2B app like Salesforce.com or Marketo—the data for each of your customers is unique and separate. As a result, for the most part, data from one of your customers will not need to interact with data from your other customers.

Because each of your customers (‘tenants’) has data that can be kept separate from each other, the tenant_id or customer_id makes a natural distribution key on which to shard the data. We call these types of applications ‘multi-tenant’ applications and find that many SaaS businesses have created ‘multi-tenant’ applications.

So for these types of SaaS applications, when you specify a distribution key such as tenant_id or customer_id, Citus will (see the sketch after this list):

  1. Take a hash value of that distribution key
  2. Identify the shard that the hash value lives in
  3. Route the data to that shard
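A toy version of those three steps, with Python's built-in hash standing in for PostgreSQL's hash function and a plain modulo standing in for Citus's hash-range lookup:

    SHARD_COUNT = 48

    def shard_for(tenant_id: int) -> int:
        # 1. Take a hash value of the distribution key.
        h = hash(tenant_id)
        # 2. Identify the shard that hash maps to. (Real Citus assigns each
        #    shard a range of 32-bit hash values; modulo is a simplification.)
        return h % SHARD_COUNT

    # 3. Route the row to the node that holds that shard.
    print(shard_for(42))  # some shard id in [0, 48)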

With a distributed table in Citus, your data is spread out across multiple nodes, so that your application can benefit from more servers, more cores, more memory. And with our shard rebalancer, Citus makes it easy to “rebalance” how the shards are spread across the different server nodes when you add more nodes to your Citus cluster. Shard rebalancing with Citus helps you optimize the performance of your distributed database.

And with distributed tables, all tables that are sharded by a key such as customer_id can be easily joined with other tables that are sharded on the same distribution key.

Reference tables

In the example of a Shopify-type app, there are some tables that you would definitely want to set up as distributed tables, such as products, orders, and line_items.

But what about smaller tables that may relate to all of your customers? Perhaps you have a categories field? Maybe there is a small lookup table for order_statuses?

With tables that relate to all of your customers, you'll want to be able to join them in an efficient way. Which brings us to the topic of reference tables. Reference tables are tables that are placed on all nodes in a Citus cluster. Instead of being tied to the shard count, we keep one copy of the reference table on each node (as reference!), which you can then join against.

When interacting with a reference table, we automatically perform two-phase commit on transactions. This means that Citus makes sure your data is always in a consistent state, regardless of whether you are writing, modifying, or deleting.

Reference tables are handy for tables that are the same for all customers, frequently joined against, often smaller (under 10 million rows), and less frequently updated.
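Continuing the hypothetical psycopg2 session from the earlier sketch, setting one up uses the documented create_reference_table function:

    # A small lookup table, kept in full on every node so joins stay local.
    cur.execute("""
        CREATE TABLE order_statuses (
            status_id   int PRIMARY KEY,
            description text
        );
    """)
    cur.execute("SELECT create_reference_table('order_statuses');")
    conn.commit()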

Standard Postgres tables

Because Citus is an extension to Postgres, we hook into the core Postgres APIs and adjust query plans as necessary. This means that when you use Citus, the coordinator node you’re connecting to and interacting with is also a Postgres database. If you have tables that don’t need to be sharded because they’re small and not joined against, you can leave these as standard Postgres tables.

To create standard Postgres tables, you don’t have to do anything extra, since a standard Postgres table is the default—it’s what you get when you run CREATE TABLE. In almost every Citus deployment, we see standard Postgres tables co-exist with distributed tables and reference tables. The most common standard Postgres table is the users table which you use for user login and authentication, though we often see standard Postgres tables used for a mix of administrative type tables as well.

In Review: Three Types of Tables

We hope that as you dig into Citus, this guide gives you some good baselines for mapping the tables in your application to the various table types in Citus. As a quick review:

Which Citus table type is right for you?

If you have any questions about how to scale out your Postgres database with Citus or how to model your data for a distributed database, we’re happy to help. We try to make it easy to adopt Citus too: Citus is available as open source, as on-prem software, and as a fully-managed database service on AWS.

Just drop us a note and let us know if you want to explore whether Citus is right for you and your SaaS application.

Hacker News: High refined sugar intake linked to a 23% higher risk of mental disorders
Comments
Ars Technica: Genetic evidence suggests the Canaanites weren’t destroyed after all

Claude Doumet-Serhal

The Canaanites are famous as the bad guys of the Book of Joshua in the Tanakh, or the Hebrew Bible. First, God orders the Hebrews to destroy the Canaanites along with several other groups, and later we hear that the Canaanites have actually been wiped out. Among archaeologists, however, the Canaanites are a cultural group whose rise and fall has remained a mystery. Now, a group of archaeologists and geneticists have discovered strong evidence that the Canaanites were not wiped out. They are, in fact, the ancestors of modern Lebanese people.

The Canaanites were a people who lived three to four thousand years ago off the coast of the Mediterranean, and their cities were spread across an area known today as Jordan, Lebanon, Israel, Palestine, and Syria. Though they were one of the first civilizations in the area to use writing, they wrote most of their documents on papyrus leaves that didn't survive. As a result, our only information about these people has come from their rivals and enemies, like the Hebrews, whose accounts were likely biased.

Read 12 remaining paragraphs | Comments

Ars Technica: Goop doctor says she’s not really Goop’s doctor, calls site a “caricature”

(credit: https://aviva.herb-pharm.com/)

A doctor who appeared to vouch for and defend Gwyneth Paltrow’s high-profile lifestyle and e-commerce site, Goop, now says that she does not see herself as a Goop doctor and would not endorse the site, according to an interview with Stat.

Two weeks ago, Dr. Aviva Romm provided a signed letter included in a Goop post titled “Uncensored: A Word from Our Doctors.” The post, written in part by the Goop team along with Romm and another doctor (Steven Gundry), collectively defended Goop’s questionable health products and penchant for unproven and often nonsensical medical theories. Those theories include Moon-powered vaginal eggs and energy-healing space-suit stickers.

The post was written in response to a wave of online criticism from journalists, medical professionals, and patient advocates, particularly blogger Dr. Jen Gunter, an Ob/Gyn who has written often about Goop.

Read 7 remaining paragraphs | Comments

Hacker News: I Almost Left Tech Today, Here’s Why
Comments
jwz: Shortlinks
Shortlinks are terrible for all kinds of reasons, but this post isn't about that. But let me get that part out of the way first:

So, all that aside -- it's still an interesting numerical / bit-twiddling problem, on a purely technical level.

Back when I switched to WordPress, I noticed that the "shortlinks" it generated for every post were terrible. They really weren't that short at all, just appending the base 10 numeric post ID to the blog's base URL. They were barely shorter than the long URL that includes the post's whole subject. So I wrote a plugin to do better. For example, the blog post:

has this default shortlink:

Other services give us:

My code gives us:

I did that by just encoding the post's ID number in base64, which is the same thing those other shorteners do, except that the ID in question is intrinsic to the post. Other shorteners either just increment a global variable, or pick a random non-conflicting number. Of course the smaller that number is, the more traversable the space is, which can be a problem.

But since the post's ID number isn't a secret, maybe it could be shorter? Could it be fewer than 4 bytes? Sure, if your post IDs were smaller. By default, a brand new WordPress blog gives its first post the ID 100, which encodes as "ZA". This blog currently has 9469 posts, so that would have still been way down in the three-byte space, "JP0". The post IDs aren't quite consecutive (the number increases every time you do a preview, among other things), but it still would have fit in three.
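That encoding is easy to reproduce. A sketch in Python, assuming the plugin packs the ID into big-endian bytes and strips the base64 padding (an assumption, but it matches the examples above):

    import base64

    def shortlink_id(post_id: int) -> str:
        # Pack the integer into the fewest big-endian bytes, then base64
        # (URL-safe alphabet) with the trailing '=' padding stripped.
        nbytes = max(1, (post_id.bit_length() + 7) // 8)
        raw = post_id.to_bytes(nbytes, "big")
        return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

    print(shortlink_id(100))   # 'ZA'  -- a fresh blog's first post
    print(shortlink_id(9469))  # 'JP0' -- still in three-byte space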

Unfortunately, I used to host my blog on Livejournal, and only migrated it here in 2010. The tool I used to import the blog preserved Livejournal's post ID numbers in the WordPress database. Those were already four bytes: "FDWn" was the last one. And then immediately after that, something went wonky with the import, and subsequent WordPress IDs jumped by eleven million for some reason, all the way up to "ygO-". If I had noticed it at the time, I could have done surgery to pull that number back down, but since then there have been almost 5,000 more posts, and I suspect that WordPress might lose its mind if post IDs are non-increasing. It doesn't matter, though, because these IDs will still fit in 4 bytes for the next 3.5 million posts.

Anyway, a few weeks ago I decided to waste some time making shortlinks for the DNA web site. Since there was no Livejournal fuckery, the WP blog over there already had nice and small IDs that fit in three bytes, so its shortlinks looked like http://dnalounge.com/. But I thought it might be interesting to make shortlinks for the various other pages on the site, too. Most of those pages are date-based, so that suggests a way to generate unique IDs that are predictable and do not require a global counter: just use the date! But a time_t is a big number that takes six bytes to encode, so that won't do.

So I computed the number of days since the Epoch instead of the number of seconds (no, you can't just divide, because of leap years and daylight savings). Then there's the matter of the directory (is this a blog post, a calendar page, a flyer page, a gallery page?) and the room suffix (is this a daytime event in the main room, a nighttime event in Above DNA, etc?) So I use 3 bits for each of those, adding 6 bits to the 15-bit day number, and a 21-bit number still handily encodes as 4 bytes.
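Here is how that packing might look in Python; the 15/3/3 bit layout and the field order are my guess from the description, not necessarily what the site actually uses:

    import base64
    from datetime import date

    def encode_page(day: date, directory: int, room: int) -> str:
        # Day count via calendar arithmetic, sidestepping the leap-year and
        # DST pitfalls of dividing a local time_t by 86400.
        days = (day - date(1970, 1, 1)).days
        assert days < (1 << 15) and 0 <= directory < 8 and 0 <= room < 8
        # Guessed layout: 15-bit day | 3-bit directory | 3-bit room = 21 bits.
        n = (days << 6) | (directory << 3) | room
        # 21 bits fit in 3 bytes, which base64-encode as exactly 4 characters.
        return base64.urlsafe_b64encode(n.to_bytes(3, "big")).decode()

    print(encode_page(date(2017, 7, 27), 2, 1))  # a hypothetical flyer page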

So here's a gallery: http://dnalounge.com/ and its calendar page: http://dnalounge.com/ and flyer: http://dnalounge.com/ and a blog post from around the same time: http://dnalounge.com/. That they start with low capital letters means there's plenty of space left.

Of course those aren't actually all that short, since unsurprisingly, whoever was squatting "DNA.com" back in 1998 never answered my email when I tried to find out what their price for it would be. But if someone wanted to buy me "dnaloun.ge" from the Registrar of the Great Nation of Georgia, I wouldn't say no.

BTW, autocomplete keeps changing "shortlink" to "chortling", which is what I think we should call them now.

Previously.

Hacker News: Zuckerberg and Musk Are Fighting About Their Personal Brands, Not AI
Comments
Game Wisdom: Game Industry Insight Into the Unity Brand Issue

Game Industry Insight Into the Unity Brand Issue, by Josh Bycer (josh@game-wisdom.com)

Today’s Industry Insight looks at the growing criticism and attacks on Unity from a brand perspective. I talked about how the popular game engine is being viewed by consumers and developers, and why this should be a bigger deal.

The post Game Industry Insight Into the Unity Brand Issue appeared first on Game Wisdom.

Hacker News: Sci-Hub’s cache of pirated papers is so big, subscription journals are doomed
Comments
Hacker News: Rescale (YC W12) is Hiring for Software Engineers in San Francisco
Comments
Ars Technica: Twitter’s stock plunges as user growth stalls

Traders at the New York Stock Exchange beneath a monitor displaying Twitter's stock symbol in 2016. (credit: Michael Nagle/Bloomberg via Getty Images)

Several years ago, Twitter seemed like it would be the social media darling of the decade. Founders had dreams of being the first Internet company to reach one billion users, making it "the pulse of the planet."

That's not going to happen, and investors are cluing in. Twitter had 328 million average monthly active users, or MAU, in the three months ending in June, which is unchanged from the previous quarter. The company's shares were down more than 10 percent this morning on the news.

The news comes despite Twitter's role in the daily news cycle perhaps being more prominent than ever, given the platform often serves as President Donald Trump's favored medium of expression.

Read 6 remaining paragraphs | Comments

Hacker News: Researchers shut down AI that invented its own language
Comments
Hacker News: Successful Solo Founders
Comments
Ars Technica: Stealthy Google Play apps recorded calls and stole e-mails and texts

(credit: portal gda)

Google has expelled 20 Android apps from its Play marketplace after finding they contained code for monitoring and extracting users' e-mail, text messages, locations, voice calls, and other sensitive data.

The apps, which made their way onto about 100 phones, exploited known vulnerabilities to "root" devices running older versions of Android. Root status allowed the apps to bypass security protections built into the mobile operating system. As a result, the apps were capable of surreptitiously accessing sensitive data stored, sent, or received by at least a dozen other apps, including Gmail, Hangouts, LinkedIn, and Messenger. The now-ejected apps also collected messages sent and received by WhatsApp, Telegram, and Viber, which all encrypt data in an attempt to make it harder for attackers to intercept messages while in transit.

The apps also contained functions allowing for:

Read 3 remaining paragraphs | Comments

Ars Technica: In wake of CTE study, Ravens’ smarty John Urschel retires from football at 26

John Urschel, #64 of the Baltimore Ravens, retired from football. (credit: Getty | Matt Hazlett)

John Urschel, a Baltimore Ravens offensive lineman and PhD candidate in applied mathematics at MIT, has announced his retirement from football at the age of 26. The announcement comes just days after publication of a case study that found widespread signs of a degenerative brain disease among football players who donated their brains to research.

"This morning John Urschel informed me of his decision to retire from football," Ravens’ coach John Harbaugh said in a statement. "We respect John and respect his decision. We appreciate his efforts over the past three years and wish him all the best in his future endeavors."

Urschel played with the Ravens for three seasons and was competing for the starting center job. Thus far, he has not publicly discussed his reasoning for the early and abrupt retirement, which was announced just before the first full-team practice. However, a team source told ESPN that his decision was linked to the new brain study.

Read 6 remaining paragraphs | Comments

Hacker News: Launch HN: Sunu (YC S17) – Sonar wristband helping blind people navigate
Comments
Ars Technica: Report: Human embryo edited for first time in US, pushes limits

(credit: Getty | Media for Medical)

A team of researchers in Oregon have become the first in the US to attempt genetically altering human embryos, according to reporting by MIT Technology Review. The attempt is said to represent an advance in the safety and efficacy of methods used to correct genetic defects that spur disease.

Until now, the only three published reports of human embryo gene editing were from researchers in China. But their experiments—using a gene-editing method called CRISPR—caused “off-target” genetic changes, basically sloppy edits to the DNA that were not intended. Also, not all the cells in the embryos were successfully edited, causing an effect called “mosaicism.” Together, the problems suggested that the technique was not advanced enough to safely alter human embryos without unintended or incomplete genetic consequences.

Scientists familiar with the new US work told MIT Technology Review that the Oregon team has addressed these issues. They’re said to have shown in experiments with “many tens” of human embryos that they can correct genetic mutations that cause disease while avoiding mosaicism and off-target effects. Their improved method allows for earlier delivery of CRISPR into cells, at the same time sperm fertilize an egg.

Read 5 remaining paragraphs | Comments

Ars Technica: Microsoft rationalizes and rebrands Windows 10, Office updates again

One of the more visible aspects of Windows as a Service is that Microsoft has been learning as it goes along, and didn't come straight out of the gate with a clear vision of precisely how Windows updates would be delivered, or when. Initially the plan was to push each release out to consumers as the "Current Branch" (CB), and a few months later bless it as good for businesses, as the "Current Branch for Business" (CBB).

A clearer plan has been crystallizing over the last few months, first with the announcement in April that Windows and Office would have synchronized, twice-annual releases, and then June's announcement that Windows Server would also be on the semi-annual release train.

Today, Microsoft has put all the pieces together and delivered what should be the long-term plan for Windows, Windows Server, and Office updates. It's not a huge shake-up from the cobbled-together plan before, but the naming is new and consistent.

Read 6 remaining paragraphs | Comments

Hacker News: A Sculpture Controlled by Live Honeybees
Comments
Ars Technica: Cable lobby claims US is totally overflowing in broadband competition

(credit: Free Press)

Are you ever frustrated about a lack of choice for home Internet providers? Well, worry no more. The nation's top cable lobby group is here to let you know that the US is simply overflowing in broadband competition.

In a new post titled "America's competitive TV and Internet markets," NCTA-The Internet & Television Association says that Internet competition statistics are in great shape as long as you factor in slow DSL networks and smartphone access.

Competition isn’t just the rule in television, it defines broadband markets as well. In spite of living in one of the largest and most rural nations, 88 percent of American consumers can choose from at least two wired Internet service providers. When you include competition from mobile and satellite broadband providers, much of America is home to multiple competing ISPs leveraging different and ever-improving technologies. This competition has led to rapid progress in the quality of consumer internet connections with average peak speeds in America quadrupling over the last five years, from 23.4 Mbps to 86.5 Mbps and the average price per megabit dropping 90 percent in 10 years, from $9.01 per megabit per second to $0.89 per megabit per second.

Many Americans who feel that they have only one viable choice for home broadband might think that cable lobbyists are describing an alternate reality. But it's easy to see the difference between NCTA marketing and Internet users' actual experiences. Yes, if you factor in any wireline home Internet provider offering any speed, then US customers can generally choose between a fast cable network and a slow DSL one. But if one of your two options isn't fast enough to meet your needs, then there's really just one choice.

Read 12 remaining paragraphs | Comments

Ars Technica: Apple discontinues iPod nano and shuffle, updates iPod Touch models

(credit: Chris Foresman)

You'll see no mention of the iPod nano or iPod shuffle on Apple's website anymore. Today, the company removed the two media players from its website, and reports suggest the company is discontinuing both devices. A report from Business Insider includes a statement from an Apple spokesperson citing the "simplifying" of the iPod lineup.

"Today, we are simplifying our iPod lineup with two models of iPod touch now with double the capacity starting at just $199 and we are discontinuing the iPod shuffle and iPod nano," reads the statement from an Apple spokesperson.

Some of the most affordable products in Apple's lineup, the iPod nano started at $149 and the iPod shuffle started at $49. Both devices have been sitting on the back burner for a while: Apple hasn't introduced a meaningful update to either device since 2012, only adding new color options for both in 2015.

Read 3 remaining paragraphs | Comments

Hacker News: Hill for the data scientist: an xkcd story
Comments
Hacker News: China Is Engineering Genius Babies (2013)
Comments
Hacker News: Ravens OL John Urschel, 26, retires abruptly, two days after CTE study
Comments
Hacker News: How to launder $4B worth of Bitcoin
Comments
Ars Technica: LG disappointed by sales of flagship smartphone

The LG G6.

Another year, another proclamation from LG that its flagship smartphone isn't selling as well as expected. Last year it was the LG G5, when the company blamed a bad quarter on "weak sales of [the] G5." The year before that, it was the LG G4, which had sales that "fell short of expectations." This year it's the LG G6—in its latest earnings report, LG blamed the "challenging" quarter on “weaker than expected premium smartphone sales and increase in component costs.”

As a whole, LG is doing fine, with the company reporting that "Three of the company’s four main business units reported higher revenues than a year ago." Home Appliances, Home Entertainment, and Vehicle Components are the three seeing improvements, while the mobile division is lagging behind.

LG's strategy with the G6 never made a ton of sense, seeming like it was only aiming for "second place" behind Samsung. LG launched the G6 in 2017 but used Qualcomm's old 2016 SoC, the Snapdragon 821. At the launch event for the G6, LG said it shipped last year's SoC in this year's phone in a bid to get to market faster than the Snapdragon 835 devices. By the time LG finally got around to launching the G6 in the US, though, it was already one week after the Galaxy S8 launch event. If LG had planned to beat Samsung and other Snapdragon 835 devices to market, that lead seemed to have evaporated at some point. The G6 technically had a three-week head start on the S8, but by the time the G6 was available the S8 launch event already happened, the phone was unveiled, and Samsung was already taking preorders.

Read 2 remaining paragraphs | Comments

Ars Technica: Feds indict a leading Bitcoin exchange for money laundering

(credit: Zach Copley)

Yesterday morning we reported on the arrest of a Russian man suspected of running a $4 billion money-laundering scheme. Later in the day, US officials released the indictment against the suspect, Alexander Vinnik.

That indictment reveals that the alleged $4 billion money laundering operation was actually BTC-e, one of the internet's most popular Bitcoin exchanges. According to the feds, BTC-e didn't comply with anti-money laundering laws that require financial businesses to collect information about their customers and report suspicious activity to the authorities. As a result, it became popular with ransomware authors looking to cash in their ill-gotten bitcoins and drug traffickers and other criminals looking to move money around the world.

The feds also suggest that Vinnik was a central figure in the massive bitcoin theft that was a major factor in the downfall of Mt. Gox, the Japanese Bitcoin exchange that led the market in Bitcoin's early years. If those allegations are confirmed, it would lay to rest one of the biggest unsolved crimes in the Bitcoin world.

Read 9 remaining paragraphs | Comments

Hacker News: ZeppelinOS: an operating system for smart contract applications
Comments
Hacker News: TSA will require separate screenings for electronics larger than a cell phone
Comments
Hacker News: Porting an historic Python2 module into Python3
Comments
Hacker News: Apple Apparently Discontinues iPod Nano and iPod Shuffle
Comments
A List Apart: Practical User Research: Creating a Culture of Learning in Large Organizations

Enterprise companies are realizing that understanding customer needs and motivations is critical in today’s marketplace. Building and sustaining new user research programs to collect these insights can be a major struggle, however. Digital teams often feel thwarted by large organizations that are slow to change and have many competing priorities for financial investments.

As a design consultant at Cantina, I’ve seen companies at wildly different stages of maturity related to how customer research impacts their digital work. Sometimes executives struggle to understand the value without quantifiable numbers. Other times engineering teams see customer research and usability testing as a threat to delivery dates.

While you can’t always tackle these issues directly, the great thing about large organizations is that they’re brimming with people, tools, and work practices forming an overall culture. By understanding and utilizing each of these organizational resources, digital teams can create an environment focused on learning from customers.

I did some work recently for a client I’ll call WorkTech, who had this same struggle aligning their digital projects with the needs of their customers. WorkTech was attempting to redesign their entire ecommerce experience with a lean budget and team. In a roughly six-month engagement, two of us from Cantina were tasked with getting the project back on track with a user-centered design approach. We had to work fast and start bringing customer insights to bear while moving the project forward. Employing a pragmatic approach that looked at people, tools, and work practices with a fresh set of eyes helped us create an environment of user research that better aligned the redesign with the needs of WorkTech’s customers.

Get comfortable talking to People in different roles

Effective user research programs start and end with people. Recognizing relationships and the motivations held by everyone interacting with a product or service encourages goodwill and can unearth key connections and other, less tangible benefits. To create and sustain a culture of learning in your company, find a group of users to interview—get creative, if you have to—and enlist the support of teammates and stakeholders.

Begin by taking stock of anyone connected to your product. You won’t always find a true set of end users internally, but everyone can help raise awareness of the value of user research—and they can help your team sustain forward progress. Ask yourself the following questions to find allies and research resources:

Our WorkTech project didn’t have a formal research budget for recruiting users (or any other research task). What we did have going for us was a group of internal users who gave our team immediate access to an initial pool of research participants. The primary platform we were hired to help redesign was used by two groups: WorkTech employees and the customers they interacted with. Over time, our internal users were able to connect us with their external counterparts, amplifying the number of people offering feedback significantly.

Maximize the value of every interview

While interviewing external customers, we kept an eye on the long term success of our research program and concluded each session by asking participants:

During each conversation, we also identified distinct areas of expertise for each user. This allowed us to better target future usability testing sessions on specific pieces of functionality.

Using this approach, our pool of potential participants grew exponentially and we gained insight into the shared motivations of different user personas. Taking stock of such different groups of people using the platform also revealed opportunities that helped us prioritize different aspects of the overall redesign effort.

Find helpful Tools that are already available or free

Tools can’t create an effective user research program on their own, but they are hugely helpful during the actual execution of research. While some organizations have an appetite for purchasing dedicated “user research” platforms able to handle recruitment, scheduling, and session recordings, many others already have tools in place that bring value to the organization in different areas. If your budget is tiny (or nonexistent), you may be able to repurpose or extend the software and applications your company already uses in a way that can support talking to customers.

Consider the following:

We discovered early on at WorkTech that our internal user base had very similar toolsets because of company-wide technology purchases. Virtually all employees already had Cisco Webex installed and were familiar with remote conferencing and sharing their screen.

WorkTech offices and customers were spread across the continental United States, so it made sense for our sessions to be remote, moderated conversations via phone and teleconference. Using Webex allowed the internal users to focus on the actual interview, avoiding the friction they might have felt attempting to use new technology.

Leveraging pre-existing tools also meant we could expand our capabilities without incurring significant new costs. (The only other tool we introduced was a free InVision account, which allowed us to create simple prototypes of new UI concepts, conduct weekly usability tests, and document and share our findings quickly and easily.)

Document and define templates as you go

Many digital research tools are simply well-defined starting points—templates for the various types of communication and idea capture needed. If purchasing access to these automated tools is out of the question, using a little elbow grease can be equally effective over time.

At WorkTech, maintaining good documentation trails minimized the time spent creating new materials for each round of research and testing. For repeatable tasks like creating scripts and writing recruitment emails, we simply saved and organized each document as we created it. This allowed us to build a library of reusable templates over time. Even though it was a bit of a manual effort, this payoff increased with every additional round of usability testing.

Utilizing available tools eliminates another significant hurdle to getting started—time delays. In large organizations with tight purchase protocols, using repurposed and free tools can enable teams to get moving quickly. Filling in the remaining gaps with good documentation and repeatable templates covers a wide range of scenarios and doesn’t let finances act as a blocker when collecting insights from users. 

Take a fresh look at your company’s Work Practices

A culture of learning won’t be sustainable over the long term if the lessons learned from user research aren’t informing what is being built. Bringing research insights to bear on your product or site is where everything pays off, ensuring digital teams can focus on what delivers the highest value to customers.

Being successful here requires a keen understanding of the common processes your organization uses to get things accomplished. By being aware of an organization’s current work practices (not some utopian version), digital teams can align what they’ve learned with practices that help ensure solutions get shipped.

Dig into the work practices in your organization and identify ways to meet people where they are:

The WorkTech team collaborating with us on the project already had weekly meetings on the calendar, with an open agenda for high priority items. Knowing it would be important to get buy-in from this group, we set up our research and usability sessions each week on Tuesdays. This allowed us to establish a cadence where every Tuesday we tested prototypes, and every Wednesday we presented findings at the WorkTech team meeting. As new questions or design concepts to validate came up, the team was able to document them, pause any further debates, and move on to other topics of discussion. Everyone knew testing was a weekly occurrence, and within a few weeks even the most skeptical developer started asking us to get customer feedback on specific features they were struggling with.

Schedule regular customer sessions even before you are “ready”

Committing to a cadence of regular weekly sessions also allowed us to separate scheduling from test prep tasks. Rather than scheduling sessions only when we desperately needed feedback, we had the time already set aside each Tuesday, so we simply had to develop questions and tests for the highest priority topic at that point in time. If something wasn’t quite ready, the next set of sessions was only a week away.

Using these principles, we conducted 40+ sessions over the course of 12 weeks, gathering valuable insights from the two primary user groups. We were able to gather quantifiable data pointing to one design pattern over another, which minimized design debates and instilled confidence in the research program and the design. In addition to building relationships with users across the spectrum, the sessions helped us uncover several key interface issues that we were then able to design around.

Even more valuable than the interface issues were the general use cases that came to light, where the website experience was only one component in a larger ecosystem of internal processes at customers’ organizations. These insights proved valuable for our redesign project, but they also provided a deeper understanding of WorkTech’s customer base, helping to prove the value of research efforts to key stakeholders.

Knowing the schedules and team norms in your organization is critical for creating a user research program whose insights get integrated into the design and development process. The insights from a single set of sessions are important, but creating a culture surrounding user research is more valuable to the long-term success of your product or site, as is a mindset of ongoing empathy toward users.

To help grow and sustain a culture of research, though, teams have to be able to prove the value in financial terms. Paul Boag said it elegantly in the third Smashing Magazine book: “Because cost is (often a) primary reason for not testing, money has to be part of your justification for testing.”

The long-term success of your program will likely be tied to how well you can prove its ROI in business terms, even though the methods described here minimize the need to ask for money. In other words, translate the value of user research into terms any business person can understand. Find ways to quantify the work hours currently lost to feature debates and building unvalidated features, and you’ll uncover business costs that can be eliminated.
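
As a purely illustrative calculation (the numbers are invented, not WorkTech’s): if five team members each lose two hours a week to debates over unvalidated features at a blended cost of $75 an hour, that’s $750 a week, or roughly $39,000 a year. A figure like that gives any stakeholder something concrete to weigh against a research program that costs almost nothing to run.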

User research doesn’t have to be a big dollar, corporate initiative. By paying attention to the people, tools, and work practices within an organization, your team can demonstrate the value of user research on the ground, which will open doors to additional resources in the future.

Hacker NewsBitcoin Cash: Why It's Forking the Blockchain and What That Means
Comments
Ars TechnicaMeta-analysis finds sperm counts dropped 50%, media predicts human extinction

Men’s spunk may be getting noticeably less spunky in some high-income countries, according to a meta-analysis of international swimmers.

Skimming and re-examining sperm data from 185 past independent studies, researchers estimated that sperm counts of men from select high-income regions—North America, Australia, New Zealand, and Europe—dropped about 52 percent between 1973 and 2011, from 99 million sperm per milliliter to about 47 million per milliliter. Likewise, estimates of total sperm count per batch dropped 59 percent, from 337.5 million in 1973 to 137.5 million in 2011.

The researchers, led by Hagai Levine of Hebrew University, also looked at data from what they referred to as “other” countries, including some in South America, Asia, and Africa. They saw no trends in these places, but they also had relatively little data from them.

Read 10 remaining paragraphs | Comments

Hacker NewsFacebook with Adblocker makes 2000+ requests
Comments
Hacker NewsShow HN: Nuclino – If Trello and Google Docs Had a Baby
Comments
Hacker NewsIn Conversation: Trent Reznor
Comments
Hacker NewsForgotten Religions That Worshipped Electricity
Comments
Hacker NewsWhy Sonic the Hedgehog is 'incorrect' game design
Comments
Hacker NewsThe AMD Ryzen 3 1300X and Ryzen 3 1200 CPU Review: Zen on a Budget
Comments
Hacker NewsAWS is having widespread issues
Comments
Hacker NewsProject Snowflake: Non-blocking safe manual memory management in .NET
Comments
Hacker NewsIs SQS down?
Comments
Hacker NewsThe Hijacking of the Brillante Virtuoso
Comments
Ars TechnicaYouTube Red and Google Play Music may merge into one service

Google is notorious for having many services that do similar things, like its array of chat apps. Google's music services have been fragmented for years, but the company may change that soon. According to a report from The Verge, YouTube's head of music Lyor Cohen stated at the New Music Seminar conference in New York last night that YouTube Red and Google Play Music should merge to make a singular, cohesive service.

Although the report doesn't mention YouTube Music (which is yet another separate service), it's safe to say that all three streaming offerings could be combined into one. Google merged the YouTube Music and Google Play Music product teams earlier this year, and that move came shortly after the business development teams for both services merged in 2016.

Google didn't confirm or deny the merger, but the company did say users would be given notice well before any big changes occur. "Music is very important to Google and we’re evaluating how to bring together our music offerings to deliver the best possible product for our users, music partners, and artists," reads the statement in the report. "Nothing will change for users today and we’ll provide plenty of notice before any changes are made."

Read 5 remaining paragraphs | Comments

Lambda the UltimateImplementing Algebraic Effects in C

Implementing Algebraic Effects in C by Daan Leijen:

We describe a full implementation of algebraic effects and handlers as a library in standard and portable C99, where effect operations can be used just like regular C functions. We use a formal operational semantics to guide the C implementation at every step where an evaluation context corresponds directly to a particular C execution context. Finally we show a novel extension to the formal semantics to describe optimized tail resumptions and prove that the extension is sound. This gives two orders of magnitude improvement to the performance of tail resumptive operations (up to about 150 million operations per second on a Core i7@2.6GHz).

Another great paper by Daan Leijen, this time on a C library with immediate practical applications at Microsoft. The applicability is much wider though, since it's an ordinary C library for defining and using arbitrary algebraic effects. It looks pretty usable and is faster and more general than most of the C coroutine libraries that already exist.

It's a nice addition to your toolbox for creating language runtimes in C, particularly since it provides a unified, structured way of creating and handling a variety of sophisticated language behaviours, like async/await, in ordinary C with good performance. There has been considerable discussion here of C and low-level languages with green threads, coroutines and so on, so hopefully others will find this useful!

Hacker NewsShow HN: A 2D physics simulator in JavaScript
Comments
Hacker NewsJeff Bezos Surpasses Bill Gates as World's Richest Person
Comments
Hacker NewsShow HN: The JavaScript Way, a book for learning modern JavaScript from scratch
Comments
Ars TechnicaThirty Meter Telescope gets a construction permit—with conditions

On the road, near the summit. There are presently 10 optical telescopes on top of Mauna Kea. (credit: Eric Berger)

The Big Island of Hawaii has perhaps the best astronomical seeing conditions in the northern hemisphere, and the University of California system and Caltech have a $1.4 billion plan to build the world's largest telescope there. The Thirty Meter Telescope (TMT) would open up an unprecedented window into the early history of the universe—and other unknown wonders.

But some native Hawaiians do not want further telescopes built on the sacred summit of Mauna Kea, which at nearly 14,000 feet is the highest point in the chain of Pacific islands. They have put up fierce opposition to the telescope's construction alongside other instruments already on the summit and have scored some wins. For example, after the state's Board of Land and Natural Resources issued a building permit to the TMT institutions, the State Supreme Court invalidated it in 2015 because proper state procedures had not been followed.

Now, the telescope builders have won an important victory. On Wednesday, retired judge Riki May Amano, who is overseeing the contested-case hearing, issued a ruling that granted the TMT institutions a construction permit. It included 31 conditions, such as “ensuring that employees attend mandatory cultural and natural resources training,” and a “substantial” but unspecified amount of sublease rent.

Read 3 remaining paragraphs | Comments