
〰 Tidal, Archiloque's feed reader

Ars Technica: President Trump: “We have to do something” about violent video games, movies

Donald Trump starred in this widely panned video game released in 2002. His White House comments on Thursday did not reference its potential influence on America's youth. (credit: Activision)

In a White House meeting held with lawmakers on the theme of school safety, President Donald Trump offered both a direct and vague call to action against violence in media by calling out video games and movies.

"We have to do something about what [kids are] seeing and how they're seeing it," Trump said during the meeting. "And also video games. I'm hearing more and more people say the level of violence on video games is shaping more and more people's thoughts."

Trump followed this statement by referencing "movies [that] come out that are so violent with the killing and everything else." He made a suggestion for keeping children from watching violent films: "Maybe they have to put a rating system for that." The MPAA's ratings board began adding specific disclaimers about sexual, drug, and violent content in all rated films in the year 2000, which can be found in small text in every MPAA rating box.

Read 6 remaining paragraphs | Comments

Ars Technica: Car companies are preparing to sell driver data to the highest bidder

(credit: Getty)

The confluence of the technology and automotive industries has given us mobility. It's not a great name, conjuring images of people riding rascal scooters in big box stores or those weird blue invalid carriages that the government handed out in the UK back in the last century. But in this case, it's meant as a catch-all to cover a few related trends: autonomous driving, ride-hailing, and connected cars. The last of these is what I'm here to discuss today. Specifically, the results of a pair of surveys: one that looks at consumer attitudes and awareness of connected cars and another that polled industry people.

Love ’em or hate ’em, connected cars are here to stay

Connected cars are booming. On Tuesday, Chetan Sharma Consulting revealed that 2017 saw more new cars added to cellular networks than new cellphones. In particular, it noted that AT&T has been adding a million or more new cars to its network each quarter for the last 11 quarters. While Chetan Sharma didn't break out numbers for other service providers, it also revealed that Verizon is set to make at least $1 billion from IoT and telematics. And previous research from Gartner suggested that, this year, 98 percent of new cars will be equipped with embedded modems.

OEMs aren't just connecting cars for the fun of it; the idea is to actually improve their customers' experience with the cars. But right now, we're still missing an actual killer app—and to be honest, data on how many customers renew those cell contracts for their vehicles. A survey out this week from Solace that polled 1,500 connected car owners found that they still don't really trust the technology.

Read 14 remaining paragraphs | Comments

Hacker News: Effect of longer-term modest salt reduction on blood pressure
Comments
Hacker News: Sprint no longer assigning ipv4 addresses
Comments
jwz: Gucci Fall 2018 Ready-to-Wear
Alessandro Michele:

A procession of transhumans, walking in trancelike step through a suite of operating theaters: Bolted together from the clothing of many cultures, they were Alessandro Michele's metaphor for how people today construct their identities -- a population undergoing self-regeneration through the powers of tech, Hollywood, Instagram, and Gucci. "We are the Dr. Frankenstein of our lives," said Michele. "There's a clinical clarity about what I am doing." Someone was cradling a baby dragon. A couple of people had replicas of their own heads tucked under their arms.



This exists, and yet 2049 and Altered Carbon looked like 1992. Even in an environment where moviemakers think that "worldbuilding" means "graphic design" instead of "economics and history", they have failed the future.

Previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously.
Ars Technica: Spreading crushed rock on farms could improve soil and lower CO₂

Instead of adding crushed limestone to soil, we could opt for basalt. (credit: Mark Robinson)

The best response to a leaking pipe is to stop the leak. But even if you haven’t quite got the leak solved, a mop can keep the pool of water on your floor from spilling into the next room.

That’s kind of the situation we’re in with our emissions of greenhouse gases. The only real solution is to stop emitting them, but anything that removes existing CO2 from the atmosphere could help lower the peak warming we experience. Some techniques to do that sound like pipe dreams when you consider scaling them up, but others can plausibly make at least modest contributions.

A new paper from a group of authors led by David Beerling of the University of Sheffield argues the case that something that sounds a little wild—spreading crushed basalt over the world’s croplands—could actually be pretty practical.

Read 11 remaining paragraphs | Comments

Planet PostgreSQL: Joshua Drake: PostgresConf US 2018: Schedule almost granite!
With more than 200 events submitted and approximately 80 slots to be filled, this has been the most difficult schedule to arrange in the history of PostgresConf. We wanted to include the majority of the content we received in the schedule. It is that level of community support that we work so hard to achieve, and we are thankful to the community for supporting PostgresConf. There is no doubt that the number one hurdle the community must overcome is effective access to education on Postgres. The US 2018 event achieves this with two full days of training and three full days of breakout sessions, including the Regulated Industry Summit and Greenplum Summit.

For your enjoyment and education here is our almost granite schedule!

See something you like? Then it is time to buy those tickets!

This event would not be possible without the continued support from the community and our ecosystem partners:

Hacker News: Testing Nvidia GTX 1050 on Generative Adversarial Network (GAN)
Comments
Ars Technica: Windows 10 Fall Creators Update reaches 85 percent of PCs

(credit: AdDuplex)

The Windows 10 Fall Creators Update is now on almost all Windows 10 PCs, reaching 85 percent of machines according to the latest numbers provided by AdDuplex.

One swallow doesn't make a summer, but the rollout of version 1709 suggests that Microsoft has found its rhythm for these updates. In response to a range of annoying problems around the deployment of version 1607, the company was very conservative with the release of version 1703. Microsoft uses a phased rollout scheme, initially pushing each update only to systems with hardware configurations known to be compatible and then expanding its availability to cover a greater and greater proportion of the Windows install base.

Version 1703 was only installed on around 75 percent of Windows 10 PCs when 1709 was released. 1709 has already passed that level, and we're still some weeks away from the release of 1803. Microsoft hasn't yet announced when that version will be released, but based on the releases of 1709 and 1703, we'd be very surprised to see it before around mid-April. The new version also doesn't yet have a name; we've hoped that Microsoft would just stick with version numbers (as the year-month version numbers are easy to understand and compare), but so far the company hasn't said anything on the matter.

Read 2 remaining paragraphs | Comments

Hacker News: YC's Series A Diligence Checklist
Comments
jwz: When you're such a sociopath that your staff hands you cue cards explaining how to feign empathy.
Exhibit A:

Exhibit B:

It's the embroidered cuffs that really lock in that "empathy" dollar.
That's a big dollar, huge dollar.

Previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously.

Ars Technica: 100-mile-range electric delivery van could beat diesel in lifetime cost

A mockup of the Workhorse truck. (credit: Workhorse)

Electric van company Workhorse announced today that it will provide 50 custom-made all-electric vans with 100 miles of range to UPS for a price lower than that of comparable off-the-shelf diesel vans, without subsidies.

Getting cost-competitive with diesel vans in acquisition price is a big step, especially because total cost of ownership (TCO) is expected to be lower on electric vehicles. That means the Workhorse vans could be significantly cheaper than comparable vans over time.

TCO is generally lower on electric vehicles because fewer moving parts means less maintenance and, as long as filling up a tank with gasoline costs more than charging up a car on electricity, electric vehicle owners can expect to save over the lifetime of the vehicle. But electric vehicle upfront cost tends to be higher than that of a traditional vehicle because lithium-ion batteries are relatively expensive.

Read 6 remaining paragraphs | Comments

Ars Technica: Star Control countersuit aims to invalidate Stardock’s trademarks

This morning, original Star Control creators Fred Ford and Paul Reiche III filed a response to Stardock’s Star Control lawsuit, which seeks injunctions and damages against Ford and Reiche for, among other things, alleged willful and intentional trademark infringement and trademark counterfeiting. Ford and Reiche also filed a countersuit against Stardock seeking their own injunctions and damages. The response and counterclaim can be viewed here and here respectively. Stardock's original filing is over here.

The filings are the latest escalation in what is turning into a deeply acrimonious legal battle over who possesses the rights to publish and sell the classic Star Control trilogy of video games—and who has the rights to create new Star Control games. (Or at least who can name their games "Star Control.")

It’s a twisted tale, and understanding what is going on requires digging back through 30 years of agreements and contracts involving companies that no longer exist. It’s not quite as screwed up as the situation around No One Lives Forever, but it is a hot mess—and now that litigation is starting, things stand to get a lot messier.

Read 41 remaining paragraphs | Comments

Ars Technica: Amateur astronomer tries out new camera, catches supernova at its start

The dot in the lower-right foreground is the supernova, from an image taken by an amateur astronomer. (credit: Víctor Buso and Gastón Folatelli)

Back in 2016, an astronomy enthusiast named Víctor Buso decided it was a good night to test a new camera on his telescope. The test went well enough that hardware in space was redirected to image what he spotted, and Buso now has his name on a paper in Nature.

Lots of amateurs, like Buso, have spotted supernovae. That typically leads to a search of image archives to determine when the last time a specific location was imaged when the supernova wasn't present—this is often years earlier. Buso didn't have to search, because his first batch of images contained no sign of the supernova. Then 45 minutes later, it was there, and the supernova continued to brighten as he captured more pictures. Buso had essentially captured the moment when the explosion of a supernova burst out of the surface of a star, and the analysis of the follow-on observations was published on Wednesday.

It went boom

The odd thing about many supernovae (specifically those in the category called type II) is that they're not explosions in the sense of the ones we experience on Earth. In a supernova, the core of the star collapses suddenly, triggering the explosion. But it happens so quickly that the outer layers of the star barely budge. The first overt sign of what's going on comes when the debris of the explosion reaches the surface of the star, a process called the breakout. This causes the star to suddenly brighten, a process that continues through some ups and downs for days afterward.

Read 8 remaining paragraphs | Comments

Ars Technica: More Amazon Go stores are coming to Seattle and Los Angeles, report says

Amazon Go from the outside. (credit: Sam Machkovech)

Amazon's partly automated retail store, Amazon Go, debuted with a location in Seattle in January, but a new report suggests that expansion plans are already underway. "Multiple people familiar with the company’s plans" told Recode that Amazon could open as many as six new stores by the end of 2018.

Most of these stores would be in Seattle, but Amazon also reportedly wants to expand beyond its home turf by opening one store in Los Angeles. It has spoken with the developer of The Grove, a large shopping plaza in the heart of LA just minutes from Hollywood and Beverly Hills. It's essentially an upscale, outdoor shopping mall—which is a common structure in LA. Amazon has previously debuted services in LA first after trialing them in Seattle, such as its Amazon Fresh grocery delivery service. LA will also be the first area to be served by a new shipping service planned by the company.

As for the Seattle stores, one source told Recode that Amazon had already identified three new locations last year.

Read 4 remaining paragraphs | Comments

Ars Technica: Venezuela says its cryptocurrency raised $735 million—but it’s a farce

Nicolas Maduro, Venezuela's president, speaks during the petro cryptocurrency launch event in Caracas, Venezuela, on Tuesday, February 20, 2018. (credit: Wil Riera/Bloomberg via Getty Images)

Venezuelan President Nicolás Maduro claims a new state-sponsored cryptocurrency called the petro raised $735 million on Tuesday—its first day on sale. But the government hasn't provided any way to independently verify that $735 million figure. And there's reason to doubt almost everything the Venezuelan government has said about the project.

The presale was a disorganized mess, with basic technical details still being worked out after the sale supposedly began. The petro network itself hasn't launched yet—allegedly that will happen next month—and the government has hardly released any information about how it's going to work.

Moreover, there's little reason to believe that the petro will maintain its value over time. The Venezuelan government has portrayed petro tokens as backed by Venezuela's vast oil reserves, but they're not. The government is merely promising to accept tax payments in petros at a government-determined exchange rate linked to oil prices. Given the Venezuelan government's history of manipulating exchange rates, experts say investors should be wary of this arrangement.

Read 23 remaining paragraphs | Comments

Hacker News: Bruce Perens Seeks Mandatory Award of Legal Fees vs. Open Source Security Inc
Comments
Hacker News: Batch Normalization for deep networks
Comments
Planet PostgreSQL: Hubert 'depesz' Lubaczewski: What is the benefit of upgrading PostgreSQL?
A couple of times, in various places, I was asked: what is the benefit of upgrading to some_version? So far, I just read the release docs and compiled a list of what has changed. But this is not necessarily simple – consider an upgrade from 9.3.2 to 10.2. That's a lot of changes. So, to be able to answer […]
Hacker News: Vertical.AI (YC W15) Is Hiring a Lead Machine Vision Researcher
Comments
Hacker News: In One Tweet, Kylie Jenner Wiped Out $1.3B of Snap's Market Value
Comments
Hacker News: My Python Development Environment, 2018 Edition
Comments
Ars Technica: In the age of the Switch, the Nintendo 3DS refuses to die

The little system that could (continue selling despite age and competition). (credit: Kyle Orland)

About a year ago, we took a look at some historical sales data and publicly speculated that sales for the Nintendo 3DS would quickly drop after the Nintendo Switch launch. But while Switch sales continue at a blistering pace, someone forgot to tell the people to stop buying Nintendo's older portable.

Industry tracking firm NPD reported yesterday that 3DS sales in the United States are healthier than ever, by some measures. In 2018, the system had its best January since 2014 in terms of dollar sales, and since 2013 in terms of unit sales. This despite the fact that there were no major releases for the system in the month (though big games like Pokemon Ultra Sun and Moon and Fire Emblem Warriors did come out just a few months ago).

It's hard to identify a trend in one surprisingly successful month, of course. But looking at 3DS sales more broadly shows the system continuing to find an audience in Switch's shadow. In the nine months following the Switch's late March 2017 launch, Nintendo shipped 5.86 million 3DSes worldwide. That's down just nine percent from the 6.42 million in sales over the same nine-month period in 2016, before the Switch was available. And it's down only a hair from 5.89 million shipments during the same period in 2015, when the 3DS was much newer.

Read 5 remaining paragraphs | Comments

Hacker News: UPS is working on a fleet of 50 custom-built electric delivery trucks
Comments
Planet PostgreSQL: Marco Slot: When Postgres blocks: 7 tips for dealing with locks

Last week I wrote about locking behaviour in Postgres, which commands block each other, and how you can diagnose blocked commands. Of course, after the diagnosis you may also want a cure. With Postgres it is possible to shoot yourself in the foot, but Postgres also offers you a way to stay on target. These are some of the important do’s and don’ts that we’ve seen as helpful when working with users to migrate from their single node Postgres database to Citus or when building new real-time analytics apps on Citus.

1: Never add a column with a default value

A golden rule of PostgreSQL is: When you add a column to a table in production, never specify a default.

Adding a column takes a very aggressive lock on the table, which blocks reads and writes. If you add a column with a default, PostgreSQL will rewrite the whole table to fill in the default for every row, which can take hours on large tables. In the meantime, all queries will block, so your database will be unavailable.

Don’t do this:

-- reads and writes block until it is fully rewritten (hours?)
ALTER TABLE items ADD COLUMN last_update timestamptz DEFAULT now();

Do this instead:

-- select, update, insert, and delete block until the catalog is updated (milliseconds)
ALTER TABLE items ADD COLUMN last_update timestamptz;
-- select and insert go through, some updates and deletes block while the table is rewritten
UPDATE items SET last_update = now();

Or better yet, avoid blocking updates and deletes for a long time by updating in small batches, e.g.:

do {
  numRowsUpdated = executeUpdate(
    "UPDATE items SET last_update = ? " +
    "WHERE ctid IN (SELECT ctid FROM items WHERE last_update IS NULL LIMIT 5000)",
    now);
} while (numRowsUpdated > 0);

This way, you can add and populate a new column with minimal interruption to your users.

2: Beware of lock queues, use lock timeouts

Every lock in PostgreSQL has a queue. If a transaction B tries to acquire a lock that is already held by transaction A with a conflicting lock level, then transaction B will wait in the lock queue. Now something interesting happens: if another transaction C comes in, then it will not only have to check for conflict with A, but also with transaction B, and any other transaction in the lock queue.

This means that even if your DDL command can run very quickly, it might be in a queue for a long time waiting for queries to finish, and queries that start after it will be blocked behind it.

When you can have long-running SELECT queries on a table, don’t do this:

ALTER TABLE items ADD COLUMN last_update timestamptz;

Instead, do this:

SET lock_timeout TO '2s';
ALTER TABLE items ADD COLUMN last_update timestamptz;

By setting lock_timeout, the DDL command will fail if it ends up waiting for a lock, and thus blocking queries for more than 2 seconds. The downside is that your ALTER TABLE might not succeed, but you can try again later. You may want to query pg_stat_activity to see if there are long-running queries before starting the DDL command.
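
For example, a quick check along these lines (a minimal sketch using the standard pg_stat_activity view) lists the longest-running active queries:

-- show active queries ordered by how long they have been running
SELECT pid, now() - query_start AS runtime, state, query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC;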

3: Create indexes CONCURRENTLY

Another golden rule of PostgreSQL is: Always create your indexes concurrently.

Creating an index on a large dataset can take hours or even days, and the regular CREATE INDEX command blocks all writes for the duration of the command. While it doesn’t block SELECTs, this is still pretty bad and there’s a better way: CREATE INDEX CONCURRENTLY.

Don’t do this:

-- blocks all writes
CREATE INDEX items_value_idx ON items USING GIN (value jsonb_path_ops);

Instead, do this:

-- only blocks other DDL
CREATE INDEX CONCURRENTLY items_value_idx ON items USING GIN (value jsonb_path_ops);

Creating an index concurrently does have a downside. If something goes wrong it does not roll back and leaves an unfinished (“invalid”) index behind. If that happens, don’t worry, simply run DROP INDEX CONCURRENTLY items_value_idx and try to create it again.
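
If you want to check whether any invalid indexes were left behind, a small illustrative query against the system catalogs will list them:

-- list indexes left in an invalid state by a failed CREATE INDEX CONCURRENTLY
SELECT c.relname AS index_name
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
WHERE NOT i.indisvalid;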

4: Take aggressive locks as late as possible

When you need to run a command that acquires aggressive locks on a table, try to do it as late in the transaction as possible to allow queries to continue for as long as possible.

For example, if you want to completely replace the contents of a table, don't do this:

BEGIN;
-- reads and writes blocked from here:
TRUNCATE items;
-- long-running operation:
\COPY items FROM 'newdata.csv' WITH CSV 
COMMIT; 

Instead, load the data into a new table and then replace the old table:

BEGIN;
CREATE TABLE items_new (LIKE items INCLUDING ALL);
-- long-running operation:
\COPY items_new FROM 'newdata.csv' WITH CSV
-- reads and writes blocked from here:
DROP TABLE items;
ALTER TABLE items_new RENAME TO items;
COMMIT; 

There is one problem: we didn’t block writes from the start, so the old items table might have changed by the time we drop it. To prevent that, we can explicitly take a lock on the table that blocks writes, but not reads:

BEGIN;
LOCK items IN EXCLUSIVE MODE;
...

Sometimes it’s better to take locking into your own hands.

5: Adding a primary key with minimal locking

It’s often a good idea to add a primary key to your tables. For example, when you want to use logical replication or migrate your database using Citus Warp.

Postgres makes it very easy to create a primary key using ALTER TABLE, but while the index for the primary key is being built, which can take a long time if the table is large, all queries will be blocked.

ALTER TABLE items ADD PRIMARY KEY (id); -- blocks queries for a long time

Fortunately, you can first do all the heavy lifting using CREATE UNIQUE INDEX CONCURRENTLY, and then use the unique index as a primary key, which is a fast operation.

CREATE UNIQUE INDEX CONCURRENTLY items_pk ON items (id); -- takes a long time, but doesn’t block queries
ALTER TABLE items ADD CONSTRAINT items_pk PRIMARY KEY USING INDEX items_pk;  -- blocks queries, but only very briefly

Breaking primary key creation down into two steps has almost no impact on users.

6: Never VACUUM FULL

The Postgres user experience can be a little surprising sometimes. While VACUUM FULL sounds like something you want to do to clear the dust off your database, a more appropriate name for the command would have been:

PLEASE FREEZE MY DATABASE FOR HOURS;

VACUUM FULL rewrites the entire table to disk, which can take hours or days, and blocks all queries while doing it. While there are some valid use cases for VACUUM FULL, such as a table that used to be big but is now small and still takes up a lot of space, it is probably not your use case.

While you should aim to tune your autovacuum settings and use indexes to make your queries fast, you may occasionally want to run VACUUM, but NOT VACUUM FULL.
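
For reference, a plain (non-FULL) vacuum of the items table from the earlier examples looks like this; it reclaims dead rows for reuse and refreshes statistics without rewriting the table or blocking queries:

-- safe to run while the application keeps reading and writing
VACUUM (VERBOSE, ANALYZE) items;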

7: Avoid deadlocks by ordering commands

If you’ve been using PostgreSQL for a while, chances are you’ve seen errors like:

ERROR:  deadlock detected
DETAIL:  Process 13661 waits for ShareLock on transaction 45942; blocked by process 13483.
Process 13483 waits for ShareLock on transaction 45937; blocked by process 13661.

This happens when concurrent transactions take the same locks in a different order. For example, one transaction issues the following commands.

BEGIN;
UPDATE items SET counter = counter + 1 WHERE key = 'hello'; -- grabs lock on hello
UPDATE items SET counter = counter + 1 WHERE key = 'world'; -- blocks waiting for world
END;

Simultaneously, another transaction might be issuing the same commands, but in a different order.

BEGIN;
UPDATE items SET counter = counter + 1 WHERE key = 'world'; -- grabs lock on world
UPDATE items SET counter = counter + 1 WHERE key = 'hello';  -- blocks waiting for hello
END; 

If these transactions run simultaneously, chances are that they get stuck waiting for each other and would never finish. Postgres will recognise this situation after a second or so and will cancel one of the transactions to let the other one finish. When this happens, you should take a look at your application to see if you can make your transactions always follow the same order. If both transactions first modify hello and then world, the first transaction will block the second one on the hello lock before it can grab any other locks.

Share your tips!

We hope you found these tips helpful. If you have some other tips, feel free to tweet them @citusdata or on our active community of Citus users on Slack.

Hacker News: Amateur astronomer gets 1-in-10M shot of supernova’s first light
Comments
Hacker News: Tesla Autopilot vs. GM SuperCruise, Head-To-Head
Comments
Hacker News: Meltdown fix committed to OpenBSD
Comments
Adam Shostack & friends: Threat Modeling Privacy of Seattle Residents

On Tuesday, I spoke at the Seattle Privacy/TechnoActivism 3rd Monday meeting, and shared some initial results from the Seattle Privacy Threat Model project.

Overall, I’m happy to say that the effort has been a success, and opens up a set of possibilities.

More at the Seattle Privacy Coalition blog, “Threat Modeling the Privacy of Seattle Residents,” including slides, whitepaper and spreadsheets full of data.

Hacker News: Robinhood Opens Cryptocurrency Trading
Comments
Hacker News: Introducing Airbnb Plus
Comments
Hacker News: Pass-Through Businesses Are Rethinking Their Status in Wake of Tax Law
Comments
Hacker News: SpaceX just launched two of its space internet satellites
Comments
Hacker News: A Global Optimization Algorithm Worth Using
Comments
Planet PostgreSQL: Jonathan Katz: Range Types & Recursion: How to Search Availability with PostgreSQL

One of the many reasons that PostgreSQL is fun to develop with is its robust collection of data types, such as the range type. Range types were introduced in PostgreSQL 9.2 with out-of-the-box support for numeric (integers, numerics) and temporal ranges (dates, timestamps), with infrastructure in place to create ranges of other data types (e.g. inet/cidr type ranges). Range data is found in many applications, from science to finance, and being able to efficiently compare ranges in PostgreSQL can take the onus off of application workloads.
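
As a small illustration of the kind of availability search this enables (the rooms and bookings tables here are hypothetical, with bookings.stay as a daterange column), the && overlap operator does the heavy lifting:

-- find rooms with no booking overlapping the requested stay
-- (rooms/bookings are illustrative tables; a GiST index on bookings.stay keeps the overlap check fast)
SELECT r.room_id
FROM rooms r
WHERE NOT EXISTS (
  SELECT 1
  FROM bookings b
  WHERE b.room_id = r.room_id
    AND b.stay && daterange('2018-03-01', '2018-03-05')
);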

Hacker News: Gitlab 10.5 released
Comments
Hacker News: Unexpected functional programming in Go
Comments
Ars Technica: Why states might win the net neutrality war against the FCC

FCC Chairman Ajit Pai listens during a Senate Appropriations Subcommittee hearing in Washington, DC, on June 20, 2017. (credit: Getty Images | Bloomberg)

Can states force Internet service providers to uphold net neutrality? That's one of the biggest unanswered questions raised by the Federal Communications Commission vote to repeal its net neutrality rules.

After the FCC vote, lawmakers in more than half of US states introduced bills to protect net neutrality in their states. The governors of five states have signed executive orders to protect net neutrality.

The major obstacle for states is that FCC Chairman Ajit Pai has claimed the authority to preempt states and municipalities from imposing laws similar to the net neutrality rules his FCC is getting rid of. ISPs that sue states to block net neutrality laws will surely seize on the FCC's repeal and preemption order.

Read 24 remaining paragraphs | Comments

Game Wisdom: The Two Failing Points of Multiplayer Game Design

By Josh Bycer (josh@game-wisdom.com)

Multiplayer is one of the most commonly requested features by gamers no matter what the design, genre, or platform. When it works, it can greatly extend the life of a video game, and create fans and consumers for years. However, going all in on multiplayer without understanding the design considerations can easily sink the community.

The Multiplayer Demand:

Multiplayer, as I’m sure everyone reading this knows, has become a major part of sustaining a game’s interest way beyond the release date. A fixture of the “Games as a Service” trend, multiplayer can provide near limitless replayability to a title.

The interaction between players can drive engagement without the need to keep up with constant new content. Some of the most profitable and recognizable games on the market have gotten there thanks to a strong multiplayer component.

Despite what fans and publishers will tell you, multiplayer is not always a sure thing, and it’s time to talk about the two major areas that designers can fail when it comes to adding multiplayer.

Scaling Up or Down:

Multiplayer Balance is a multi-faceted topic and can greatly vary based on the design in question. For our point today, I want to focus on building multiplayer out of a singleplayer experience and vice-versa.

It’s important to figure out early on what is the optimal number of players to experience your game. With that said, if you implement a singleplayer option, then your game must be balanced around a single person playing it.

Too often, developers will try to use the exact same gameplay and balance for solo play as well as a group. The problem is that this never creates a balanced playthrough for either option. If your game is balanced just for multiplayer, then the game will be frustrating to play solo. If it's only balanced for solo play, then group play will break the game.

As it goes, you also need to be aware that the more people required for a standard experience, the harder it’s going to be to get that set up. Despite how great it was to play Left 4 Dead versus, getting eight people together and organized proved to be difficult sometimes.

When designing single and multiplayer versions of your game, the concept of scaling is vital. Scaling will increase or decrease the challenge of the game based on the number of people playing at one time. What makes scaling critical is that it guarantees that, no matter the size of the group, the experience will be balanced.

Sometimes this is not easy to do. If your game features unique mechanics explicitly designed around multiple people, you may have to think out of the box when scaling. With the game Forced, a major component of the title was shifting an energy ball around the field for puzzle solving and buffs. In the original release, you literally couldn’t play through the game due to the mechanic not working for singleplayer.

And that right there is a big blind spot developers sometimes run into: putting multiplayer-only design into the singleplayer mode.

The Infrastructure:

Having a great multiplayer mode doesn’t mean anything if the playerbase cannot play with each other. Setting up the online infrastructure for your game is an important step. I don’t have a background in networking and online architecture myself, so I can’t go into too much detail on this step.

If you’re not careful about balance, it is possible to create a game that is unwinnable without a team

However, if you’re designing an online game, you better have someone on staff who is experienced in these matters. How important multiplayer is to the long-term success of your game will determine the extent of the online architecture. We could go into more detail about kinds of online services, anti-cheat options, and more, but that is beyond the scope of our talk for today.

You should make it as easy as possible for people to connect and play your game if you want to have any chance of building an online community. If you make it too difficult for people to connect, you will only be left with your hardcore fans, and that does not make a community.

The Community Bet:

Multiplayer games run into a very dangerous catch-22 when it comes to success. They must prove to people that the game will be popular in the long term, but if the game doesn’t have a strong launch, it can be viewed as dead and no one new will want to join.

Multiplayer-focused games can either have a positive or negative feedback loop of growth. For every PubG, Call of Duty, Overwatch, or CS:GO, there are countless multiplayer games that failed to cultivate a community.

In order for your game to survive, you need people at all skill levels to be playing and growing the community. If all you have is a casual and mid-core base, then you won’t have hardcore players praising your game and giving the community something to shoot for. However, without the casual and mid-core group, there will be no one for new players to interact with, and it will become impossible for them to start learning the game.

A multiplayer game needs an active community for long term growth

Not only that, but if your game is built on PVP for progress, then no one new will be able to play your game. This is also true of games with co-op design. If the community dies, how will people be able to play through your game?

Group Dynamics:

Multiplayer is a popular option for video games with many pluses that outweigh the cons. However, it is something that you have to pay attention to when implementing in your game. It is better not to have multiplayer as opposed to having a half-ass mode that people hate. For the games that can create and keep their communities, the potential for continued support is huge.

For you reading this: Can you think of a game that was actually ruined because of its multiplayer design?

The post The Two Failing Points of Multiplayer Game Design appeared first on Game Wisdom.

roguelike development: Storing entities in an array or a list?

I'm sure this has been asked before, only I can't find any post about this topic. The World Architecture FAQ Friday post looks at the subject kinda broadly.

I've been wondering what the best method of storing entities might be. So far I can think of 3 options. (For reference I usually use Java)

1. A List
This would be the easy method, for each level have a List<Entity>. If you want to get an entity at a specific position, loop through the list until you find an entity with the matching coordinates.

It should be fairly trivial to loop through the list if you have, say, 10-30 entities per level (milliseconds?), but I can imagine the cost of searching for entities increasing with the amount of 'stuff' and things going on in the level. This method would make iterating over the entities much easier than the next method.

2. An Array
Create a 2D array of entities with the same width & height as the level, initialized with nulls. This would make getting entities at a specific point very easy. Iterating would be much harder as you'd have to loop over the entire array to find all entities. Also, you would need to keep track of entity movement between tiles and you would be limited to one entity per tile.

3. Why not both?
Have a 2D array and a List<Entity>; when an entity is added to the level, it is added to the list and to the x,y index of the array. You would use the List to iterate over the entities and the array to find entities at specific points. Though, like method 2, you would have to keep track of entity movement and be limited to one entity per tile.

Of course, for a simple roguelike, method 1 would be the quick and easy solution. Method 3 seems like it might take more RAM than methods 1 and 2 but, in terms of processing, seems the fastest.

Could method 3 be overthinking the problem or is it a valid approach to entity storage? What solutions have you guys come up with?

submitted by /u/Emmsii
[link] [comments]
Hacker News: Liberating a X200
Comments
Hacker News: Show HN: A List of Hacker News's Undocumented Features and Behaviors
Comments
Planet PostgreSQL: Jean-Jerome Schmidt: Upgrading Your Database to PostgreSQL Version 10 - What You Should Know

As more and more posts on PostgreSQL 11 appear on the web, you may feel increasingly outdated using Postgres 9. Although the PostgreSQL 10 release happened only months ago, people are already talking about the next version. Things are moving, so you don’t want to be left behind. In this blog we will discuss what you need to know to upgrade to the latest version, Postgres 10.

Upgrade Options

The first thing you should be aware of before you start is that there are several ways of doing the upgrade:

  1. Traditional pg_dumpall(pg_dump) / pg_restore(psql)
  2. Traditional pg_upgrade
  3. Trigger based replication (Slony, self-written)
  4. Using pglogical replication

Why is there such a variety? Because each has a different history, requiring different efforts to be set up and offering different services. Let's look closer at each of them.

Traditional Dump/Restore

pg_dump t > /tmp/f
psql -p 5433 -f /tmp/f

Traditional dump/restore takes the longest time to complete and yet it is often a popular choice for those who can afford the downtime. First, it's as easy as taking a logical backup and restoring it to a new, higher version of the database. You could say it's not an upgrade, really, as you "import" your data to a "new structure". As a result you will end up with two setups - one old (lower version) and the newly upgraded one. If the restoration process finishes without error, you are pretty much there. If not, you have to modify the existing old cluster to eliminate any errors and start the process over again.

If you use psql for the import, you might also need to create some preload scripts yourself to execute on the new setup prior to migration. For example, you would want to run pg_dumpall -g to get a list of the roles to prepare in the new setup, or the opposite, run pg_dump -x to skip permissions from the old one. This process is pretty simple on small databases; the complexity grows with the size and complexity of your db structure and depends on what features you have set up. Basically, for this method to be successful, you need to keep trying and fixing until the upgrade is successful.

The advantages to using this method include...

  • While you may spend a long time with the one backup you made, the load on the old server is only that of taking one backup.
  • This method is mostly just a backup-restore sequence (potentially with some spells, songs and drumming)
  • This is the oldest way to upgrade and has been verified by MANY people

When you finally complete the upgrade, you either have to shut down the old server or accept some data loss (or, alternatively, replay the DML that happened on the old server while you were restoring the backup to the new server). And the time spent doing that is relative to the size of your database.

You can, of course, start "using" the new database before the restore has finished (especially before all indexes are built - the indexes often take most of the time). But nevertheless, such downtime is often unacceptable.

Traditional pg_upgrade

MacBook-Air:~ vao$ /usr/local/Cellar/postgresql/10.2/bin/initdb -D tl0 >/tmp/suppressing_to_save_screen_space_read_it

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
MacBook-Air:~ vao$ /usr/local/Cellar/postgresql/10.2/bin/pg_upgrade -b /usr/local/Cellar/postgresql/9.5.3/bin -B /usr/local/Cellar/postgresql/10.2/bin -d t -D tl0 | tail
Creating script to delete old cluster                        ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    ./analyze_new_cluster.sh

Running this script will delete the old cluster’s data files:
    ./delete_old_cluster.sh

Traditional pg_upgrade was created to shorten the time it takes to upgrade to a major version. Depending on the number of relations you have, it can be as fast as minutes (seconds in ridiculous cases, like a one-table database, and hours in the opposite cases), especially with the --link argument.

The preparation sequence differs slightly from the first upgrade method. In order to mock up the upgrade and thus check whether it's possible, you should build streaming replication or recover a standby server from WALs. Why is this so complicated? You want to be sure to test the upgrade on a database as close in state to the original one as possible. "Binary" replication or PITR will help us here. After you finish the recovery with recovery_target_action = promote (PITR), or promote the newly built slave (pg_ctl promote or place a trigger file, for streaming replication), you can then try to run pg_upgrade. Checking pg_upgrade_internal.log will give you an idea of whether the process was successful or not. Furthermore, you have the same try-and-fix approach as with the previous method: you save the actions taken against the test database in a script until you successfully pg_upgrade it. In addition, you can destroy the no-longer-needed test upgraded database and then run the saved script to prepare the original database for performing the upgrade.

The advantages to using this method include…

  • Shorter downtime than logical backup/restore
  • A neat process - pg_upgrade upgrades the original database with existing data and structure
  • Has been used a lot in the past and would still be the preference for most DBAs running versions below 9.4 (which is what allows using pglogical)

The disadvantages of using this method include…

  • Requires downtime

Trigger Based Replication

Assuming version 10 is on port 5433 and has the same table prepared:

db=# create server upgrade_to_10 foreign data wrapper postgres_fdw options (port '5433', dbname 'db10');
CREATE SERVER
Time: 9.135 ms
db=# create user mapping for vao SERVER upgrade_to_10 options (user 'vao');
CREATE USER MAPPING
Time: 8.741 ms
db=# create foreign table r10 (pk int, t text) server upgrade_to_10 options (table_name 'r');
CREATE FOREIGN TABLE
Time: 9.358 ms

This is an extremely simplistic fn() and trigger for very basic logical replication. Such an approach is so primitive that it won’t work with foreign keys, but the code is short:

db=# create or replace function tf() returns trigger as $$
begin
 if TG_OP = 'INSERT' then
   insert into r10 select NEW.*;
 elsif TG_OP = 'UPDATE' then
   delete from r10 where pk = NEW.pk;
   insert into r10 select NEW.*;
 elsif TG_OP = 'DELETE' then
   delete from r10 where pk = OLD.pk;
 end if;
return case when TG_OP in ('INSERT','UPDATE') then NEW else OLD end;
end;
$$ language plpgsql;
CREATE FUNCTION
Time: 8.531 ms
db=# create trigger t before insert or update or delete on r for each row execute procedure tf();
CREATE TRIGGER
Time: 8.813 ms

Example:

db=# insert into r(t) select chr(g) from generate_series(70,75) g;
INSERT 0 6
Time: 12.621 ms
db=# update r set t = 'updated' where pk=2;
UPDATE 1
Time: 10.398 ms
db=# delete from r where pk=1;
DELETE 1
Time: 9.634 ms
db=# select * from r;
 pk |    t
----+---------
  3 | H
  4 | I
  5 | J
  6 | K
  2 | updated
(5 rows)

Time: 9.026 ms
db=# select * from r10;
 pk |    t
----+---------
  3 | H
  4 | I
  5 | J
  6 | K
  2 | updated
(5 rows)

Time: 1.201 ms

Lastly, checking that we replicate to a different database:

db=# select *,current_setting('port') from dblink('upgrade_to_10','select setting from pg_settings where name=$$port$$') as t(setting_10 text);
 setting_10 | current_setting
------------+-----------------
 5433       | 5432
(1 row)

Time: 23.633 ms

I would call this method the most exotic, both because, with streaming replication and later pglogical, trigger-based replication has become less popular, and because it puts a higher load on the master, adds complexity during setup, and lacks well-structured documentation. There's no preparation (as such) of the process here; you just want to set up Slony across the different major versions.

The advantages of using this method include…

  • No backups need to be taken and no downtime is required (especially if you are behind some pgbouncer or haproxy).

The disadvantages of using this method include…

  • High Complexity of setup
  • Lack of structured documentation
  • Not very popular - fewer use cases to study (and share)

Along the same lines, self-written trigger replication is another possible way to upgrade. While the idea is the same (you spin up a fresh higher-version database and set up triggers on the lower version to send modified data to it), the self-written setup will be clear to you, you won't have any need for support, and you will thus potentially use fewer resources when running it. Of course, for the same reason, you will probably end up with some features missing or not working as expected. If you have several tables to move to the new version, such an option will probably take you less time and, if done well, might be less resource-consuming. As a bonus, you can combine some ETL transformations with the upgrade, switching over to a new version without downtime.

Logical Replication with pglogical

This is a very promising new way of upgrading Postgres. The idea is to set up logical replication between different major versions and literally have a parallel, higher (or lower) version database running the same data. When you are ready, you just switch connections with your application from old to new.
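
As a rough sketch only (assuming the pglogical extension is installed on both clusters; host names and database names below are placeholders), the setup boils down to creating a node on each side and a subscription on the new one:

-- on the old (provider) cluster; dsn values are placeholders
SELECT pglogical.create_node(node_name := 'provider', dsn := 'host=oldhost port=5432 dbname=db');
SELECT pglogical.replication_set_add_all_tables('default', ARRAY['public']);
-- on the new PostgreSQL 10 (subscriber) cluster
SELECT pglogical.create_node(node_name := 'subscriber', dsn := 'host=newhost port=5432 dbname=db');
SELECT pglogical.create_subscription(subscription_name := 'upgrade_sub',
                                     provider_dsn := 'host=oldhost port=5432 dbname=db');

Once the initial copy has caught up, switching the application's connection string over to the new cluster completes the upgrade.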

The advantages of using this method include…

  • Basically no downtime
  • Extremely promising feature, much less effort than trigger based replication

The disadvantages of using this method include…

  • Still highly complex to set up (especially for older versions)
  • Lack of structured documentation
  • Not very popular - fewer use cases to study (and share)

Both the trigger-based and pglogical replication major version migrations can be used to downgrade the version (up to some reasonable value of course, e.g., pglogical is available from 9.4 only and trigger replication becomes harder and harder to set up as the version you want to downgrade to gets older).

Actions to be Taken Before the Upgrade

Actions to be Taken After the Upgrade

  • Consult pg_upgrade_server.log (if you used pg_upgrade)
  • Run analyze on upgraded databases (optional, as it would be done by autovacuum, but you can choose what relations should be analyzed first if you do it yourself)
  • Prewarm popular pages (optional, but could boost performance at the beginning)
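
A minimal illustration of those last two steps (hot_table is a placeholder table name, and pg_prewarm is assumed to be available from contrib):

-- refresh planner statistics on the upgraded database
ANALYZE;
-- optionally pull a frequently read table's pages into shared buffers ('hot_table' is a placeholder)
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('hot_table');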

Conclusion

Here are some general notes that are good to know before you decide to go to PostgreSQL version 10…

  • The pg_sequences view was introduced, and the behaviour of the previously popular SELECT * FROM sequence_name changed - now only last_value | log_cnt | is_called are returned, hiding the "initial properties" from you (adjust any code that relies on the old behaviour)
  • pg_basebackup streams WAL by default. After the upgrade you might need to modify your scripts (option -x removed)
  • All pg_ctl actions now wait for completion. Previously you had to add -w to avoid trying to connect to the database straight after pg_ctl start. Thus if you still want to use "async" start or stop, you have to explicitly mark it with -W. You might need to adjust your scripts so they behave as intended.
  • All scripts for archiving WALs or monitoring/controlling streaming replication or PITR need to be reviewed, to adjust them to the changed xlog names. E.g. select * from pg_is_xlog_replay_paused() will no longer show you the state of slave WAL replay - you have to use select * from pg_is_wal_replay_paused() instead. Also cp /blah/pg_xlog/* needs to be changed to /blah/pg_wal/*, and so on for basically all occurrences of pg_xlog. The reason behind such a massive, non-backward-compatible change is to address the case where a newbie removes write-ahead logs to "clean some space" and loses the database.
  • Adjust scripts using pg_stat_replication for new names (location changed to lsn)
  • Adjust queries with set returning functions if needed
  • If you used pglogical as extension before version 10, you might need to adjust pg_hba.conf changing value between "columns"
  • Adjust scripts for the new name of the pg_log directory, which is now log, so something like find /pg_data/pg_log/postgresql-*  -mmin +$((60*48)) -type f -exec bash /blah/moveto.s3.sh {} \; needs its path updated. Of course you can create a symbolic link instead, but action would need to be taken to find logs in the default location. Another small change to the defaults is log_line_prefix - if your regular expression depended on a certain format, you need to adjust it.
  • If you were still using unencrypted passwords in your Postgres databases, this release removes that option completely. So it's time to sort things out for those who relied on --unencrypted...
  • The rest of the incompatible changes with previous releases are either too fresh to be referenced in lots of code (min_parallel_relation_size) or too ancient (external tsearch2) or are too exotic (removal of floating-point timestamps support in build), so we will skip them. Of course they are listed on the release page.
  • As it was with 9.5 to 9.6, you might need to adjust your scripts for querying pg_stat_activity (one new column and new possible values)
  • If you were saving/analyzing vacuum verbose output, you might need to adjust your code
  • Also you might want to take a look at the new partitioning implementation - you might want to refactor your existing "set" to comply with new "standards"
  • check timeline (will be reset for the new database if you pg_upgrade)

Apart from these steps that you need to know about in order to upgrade to 10, there are plenty of things that make this release a highly anticipated one. Please read the section on changes in the release notes or the depesz blog.

And remember ClusterControl helps you keep track of all the patches available for your PostgreSQL database and provides the Package Summary Operational Report that shows you how many technology and security patches are available to upgrade and can even execute the upgrades for you!

Ars Technica: Apple preps new AirPods: One with hands-free Siri, one water-resistant

One of the smallest members of Apple's product lineup may get a useful update this year. According to a Bloomberg report, Apple is working on new models of its AirPod wireless earbuds. One could debut later this year with an updated wireless chip and another with a water-resistant design may come out in 2019.

AirPods arrived in 2016 alongside the iPhone 7 as a solution to the smartphone's lack of a headphone jack. The W1 chip inside the AirPods helps them connect almost immediately to a user's Apple products. According to the report, Apple is developing a new model with a new wireless chip that helps it better manage Bluetooth connections. It's unclear if this new chip will be a variation of the W2 chip, which debuted with the Apple Watch Series 3 last year, or an entirely new one.

This year's new model may also give users voice activation for Apple's virtual assistant Siri. Currently, users must tap the side of an AirPod and then say "Hey Siri" to use voice commands. In the new AirPod models, summoning Siri would be hands-free, requiring only a voice command and no physical prompt.

Read 3 remaining paragraphs | Comments

Hacker News: Accurate Navigation Without GPS
Comments
Hacker News: The reason Facebook won’t ever change
Comments
Hacker News: Strikingly (YC W13) is hiring designers and devs in Shanghai
Comments
Hacker News: Hungry Venezuelan Workers Are Collapsing. So Is the Oil Industry
Comments
Ars Technica: The Cadillac CT6 review: Super Cruise is a game-changer

(credit: Jonathan Gitlin)

Cadillac's flagship CT6 might not have the best interior in its class. It might not have the sharpest, track-honed handling. It doesn't have a butter-smooth V12 engine. It definitely doesn't have the best infotainment system. And yet, it is carrying the most exciting technology being offered in any production vehicle on sale in 2018.

Called Super Cruise, Cadillac's new tech represents the best semi-autonomous system on the market. In fact, Super Cruise is so good, I think General Motors needs to do everything it can to add it to the company's entire model range, post-haste.

You sure sound excited about this thing

Regular readers will know this isn't the first time I've written about Super Cruise. In fact, at last year's New York auto show, we awarded it an Ars Best distinction in the "Automotive Technology" field—a bold move for new technology that we had yet to actually test.

Read 32 remaining paragraphs | Comments

Hacker News: Taiwan to ban single-use plastic straws, plastic bags, disposable utensils by 2030
Comments
Ars Technica: Watch live: SpaceX aims to launch a satellite and catch a payload fairing

SpaceX has a sooty booster on the pad in California, ready for a launch. (credit: SpaceX)

SpaceX had to scrub the Wednesday launch attempt of its Falcon 9 rocket due to upper-level winds, but will try again Thursday morning. The instantaneous launch window opens (and closes) again at 9:17am ET. This launch will occur from Vandenberg Air Force Base in Southern California.

There is heightened interest in this launch because, for the first time, SpaceX will attempt to "catch" one of the two payload fairings that enclose the satellite at the top of the rocket. The value of these fairings is about $6 million, and recovering and reusing them would both save SpaceX money and remove another roadblock on their production line for Falcon 9 rockets.

These fairings will separate from the rocket at about three minutes after launch and are "steerable" in the sense that SpaceX hopes to guide them back to a target location in the ocean. The company has been mum about how it plans to slow the fairings and collect them as they fall to Earth. However, as part of that recovery effort, SpaceX will dispatch a boat named "Mr. Steven" into the Pacific Ocean. Photos of the boat, which has a large net above it, have popped up on social media in recent weeks. Presumably the company will share more information if the recovery is a success.

Read 3 remaining paragraphs | Comments

Hacker News: IOTA: The Brave Little Toaster That Couldn’t
Comments
TomDispatch: Tomgram: Rebecca Gordon, America's Wars, A Generational Struggle (in the Classroom)

I was teaching the day the airplanes hit the World Trade Center...

Ars Technica: One-stop counterfeit certificate shops. For all your malware-signing needs

A digital signature used by malware that infected the network of Kaspersky Lab in 2014. Counterfeit certificates that generate such fraudulent signatures are being sold online for use in other malware. (credit: Kaspersky Lab)

The Stuxnet worm that targeted Iran's nuclear program almost a decade ago was a watershed piece of malware for a variety of reasons. Chief among them, its use of cryptographic certificates belonging to legitimate companies to falsely vouch for the trustworthiness of the malware. Last year, we learned that fraudulently signed malware was more widespread than previously believed. On Thursday, researchers unveiled one possible reason: underground services that since 2011 have sold counterfeit signing credentials that are unique to each buyer.

In many cases, the certificates are required to install software on Windows and macOS computers, while in others, they prevent the OSes from displaying warnings that the software comes from an untrusted developer. The certificates also increase the chances that antivirus programs won't flag previously unseen files as malicious. A report published by threat intelligence provider Recorded Future said that starting last year, researchers saw a sudden increase in fraudulent certificates issued by browser- and operating system-trusted providers that were being used to sign malicious wares. The spike drove Recorded Future researchers to investigate the cause. What they found was surprising.

"Contrary to a common belief that the security certificates circulating in the criminal underground are stolen from legitimate owners prior to being used in nefarious
campaigns, we confirmed with a high degree of certainty that the certificates are created for a specific buyer per request only and are registered using stolen corporate identities, making traditional network security appliances less effective," Andrei Barysevich, a researcher at Recorded Future, reported.

Read 8 remaining paragraphs | Comments

Charles StrossNew publication dates (and audiobook news)

So, a brief update about "The Labyrinth Index" and British audiobook editions of the Laundry Files!

Firstly, "The Labyrinth Index" will now be published on October 30th in both the US and UK, not in July as previously scheduled. (It takes time to turn a manuscript into a book—copy-editing, typesetting, checking proofs, running the printing press, distributing crates of books toshops—and due to a cascading series of delays (that started with me not deciding to write it until after my normal 2018 novel deadline had passed) we had to add three months to the production timeline.) On the other hand, the manuscript has been delivered and should be with the copy editor real soon now, so it's on the way.

Secondly, some unexpected good news for those of you in the UK, EU, Australia and NZ who like audiobooks: "The Fuller Memorandum" and "The Apocalypse Codex" are getting audio releases and are due out on May 24th!

This has been a sore spot for years. Recording audiobooks is expensive and the British audiobook market is a lot smaller than the North American one. The Laundry Files have been released in audio since book five, "The Rhesus Chart", and a couple of years ago Orbit worked with the RNIB to release the first two books in the series, but books 3 and 4 were missing—back-list titles that were uneconomical to record (and the US audio publisher wanted too much money for a license to re-use their recording).

Anyway, it looks as if the growing market for audiobooks and the growing sales of the Laundry Files have finally intersected, making it possible for Orbit to justify paying for an audio release of the missing titles, and you'll be able to listen to the entire series.

roguelike developmentVery nice and detailed article about original Rogue source with lots of step by step explanations. Great!
Very nice and detailed article about original Rogue source with lots of step by step explanations. Great! submitted by /u/bleuge
[link] [comments]
Hacker NewsList of 4000+ FinTech Startups and Companies
Comments
Hacker NewsShow HN: Headless Chrome Crawler
Comments
Hacker NewsPreparing for the "Malicious Uses of AI" by OpenAI, FHI, EFF, Cambridge, +10
Comments
Hacker NewsWhat’s So Dangerous About Jordan Peterson?
Comments
Hacker NewsLively (YC W17) Is Hiring a Sr. Demand Generation Specialist
Comments
Hacker NewsConservative GC: Is It Really That Bad?
Comments
Hacker NewsTwitter is (finally) cracking down on bots
Comments
Hacker NewsWhen baby monitors fail to be smart
Comments
Ars TechnicaMan removes feds’ spy cam, they demand it back, he refuses and sues

Enlarge / U.S. Border Patrol supervisor Eugenio Rodriguez surveys the Rio Grande River and surrounding environs August 7, 2008 in Laredo, Texas. (credit: John Moore / Getty Images)

Last November, a 74-year-old rancher and attorney was walking around his ranch just south of Encinal, Texas, when he happened upon a small portable camera strapped approximately eight feet high onto a mesquite tree near his son's home. The camera was encased in green plastic and had a transmitting antenna.

Not knowing what it was or how it got there, Ricardo Palacios removed it.

Soon after, Palacios received phone calls from Customs and Border Protection officials and the Texas Rangers. Each agency claimed the camera as their own and demanded that it be returned. Palacios refused, and they threatened him with arrest.

Read 26 remaining paragraphs | Comments

roguelike developmentWhy we decided to make a roguelike
Why we decided to make a roguelike submitted by /u/HotPepperSouce
[link] [comments]
Hacker NewsSnips Uses Rust to Build an Embedded Voice Assistant
Comments
Hacker NewsCelebrating 50 years of an unforgettable tale of the desert
Comments
Hacker NewsComputer History Museum 2018 Fellow Award – Guido van Rossum
Comments
Hacker NewsTool 'names and shames' hidden drug trials
Comments
jwzA Portrait of the Artist as a Pelvic Thrusting Rotisserie Chicken
Ben Ahles:

Previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously.

Hacker NewsTetraScience (YC S15) Is Hiring Senior Software Engineers to Make Science Better
Comments
Hacker NewsElectron 2.0.0 beta
Comments
jwzTimezones
Hey Lazyweb! Want to hear something batshit insane about the iCal format? Sure you do!

(I write these things down in the hope that doing so will cause some unfortunate soul in the future to waste less time learning this than I just did. "Today I learned something new and stupid!")

When you're exporting an event as an ICS file, so that someone can add it to their Apple or Google or Outlook calendar, you have to say what time the event starts.

Now some people will tell you, "just specify all your dates in UTC, that's easy and unambiguous." But here's what happens if you do that:

You have an event happening in San Francisco on Feb 21 at 8pm. You specify the time in UTC (DTSTART:). When someone adds that event to their calendar, the little box shows up in the calendar grid at 8pm just as it should. But then they double-click on it, and the text says: "Feb 22, 2018, 4AM to 5AM (GMT)".

While technically true, this is not a useful or convenient thing to say to an actual meat-based human who is expecting to attend that event.

So instead the sane thing is to specify the time in the local time zone where the event is happening: 8pm Pacific (DTSTART;). The box still shows up in the calendar in the same place, but now when you click on it, it says the much more sensible "Feb 21, 2018, 8PM to 9PM (PST)". It even says that if your local time zone is something else. So if you're in New York and planning a trip to San Francisco, it's obvious to you that the event in SF starts at 8pm local time, not 11pm local time (which is what the clock would say in New York).
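
Concretely, the difference is just how the start and end times are written. (Treat the timestamps below as a sketch reconstructed from the 8pm-Pacific example above.) The UTC way:

DTSTART:20180222T040000Z
DTEND:20180222T050000Z

The local-time way:

DTSTART;TZID=America/Los_Angeles:20180221T200000
DTEND;TZID=America/Los_Angeles:20180221T210000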

So far so good, right?

Oh, except that's not a valid ICS file.

Why? Because the TZID doesn't refer to the name of a time zone from the IANA Zoneinfo / Olson Database, which has been the gold standard historical record of such things since just about The Epoch. (This database is an absolute treasure, documenting and clarifying an ongoing global political and technical clusterfuck of nightmarish proportions. The work that went into this thing is staggering. It is a treasure of nightmares. But I digress.)

No, see, the iCalendar spec wants each ICS file to fully document its own notion of what the Daylight Savings Time rules are! You can't just say "Look, it's US/Pacific, ok? When is Daylight Savings Time? March? I dunno, whatever, do the right thing!" Instead you have to specify the switch-dates explicitly!

Here's what Apple does:

BEGIN:VTIMEZONE
  TZID:America/Los_Angeles
  BEGIN:DAYLIGHT
    TZOFFSETFROM:-0800
    RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
    DTSTART:20070311T020000
    TZNAME:PDT
    TZOFFSETTO:-0700
  END:DAYLIGHT
  BEGIN:STANDARD
    TZOFFSETFROM:-0700
    RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
    DTSTART:20071104T020000
    TZNAME:PST
    TZOFFSETTO:-0800
  END:STANDARD
END:VTIMEZONE

What that says is: "The definition of the America/Los_Angeles time zone is that as of 2007, it is UTC minus 7 hours from the 2nd Sunday in March through the 1st Sunday in November, and is UTC minus 8 hours otherwise."

This is, of course, false, because the rules were different before 2007. Because reasons.
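
For the record, an event from before 2007 would need a different DAYLIGHT/STANDARD pair baked in, something like this (a sketch based on the old US rules, first Sunday in April through last Sunday in October, which applied from 1987 through 2006):

BEGIN:DAYLIGHT
  TZOFFSETFROM:-0800
  RRULE:FREQ=YEARLY;BYMONTH=4;BYDAY=1SU
  DTSTART:19870405T020000
  TZNAME:PDT
  TZOFFSETTO:-0700
END:DAYLIGHT
BEGIN:STANDARD
  TZOFFSETFROM:-0700
  RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
  DTSTART:19871025T020000
  TZNAME:PST
  TZOFFSETTO:-0800
END:STANDARD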

So while it may work for this particular calendar entry, past ones have to be different. And let's say you put a far future event in your calendar -- the ten year anniversary of something or other -- and in the intervening decade, your country's bureaucrats change the Daylight Savings rules again. Oops, now it's wrong. You're still in US/Pacific, but the ICS file doesn't know that what that means has changed.

Because the authors of the ICS standard felt that indirecting through the IANA rules was a terrible idea for some reason.

Here's what the full set of US/Pacific timezone rules look like, all the way back to when the whole idiotic idea of Daylight Savings Time took hold, through today. It will definitely change again. The Olson database is constantly being updated. That's the whole point of it!

BEGIN:

(I didn't build that by hand; I coaxed Apple to spit it out by creating and exporting an event in the 1800s.)

Previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously.

Hacker NewsWhat to care about in a job
Comments
Hacker NewsSaaS Conversion Rates from Free Trial
Comments
Hacker NewsProcess of Elimination
Comments
Hacker NewsProgramming ARM μC in Rust at four different levels of abstraction
Comments
Hacker NewsI don’t understand Graph Theory
Comments
Hacker NewsDo not use NPM 5.7
Comments
Hacker NewsThe Boys Are Not All Right
Comments
Hacker NewsNo, You Probably Don't Need a Blockchain
Comments
Hacker NewsNetworked Physics in Virtual Reality
Comments
Les CrisesMiscellany of 22/01

I. Les éconoclastes

Read more

Hacker NewsGoogle Pay app for Android
Comments
Hacker NewsAnthropocene began in 1965, according to signs left in world’s ‘loneliest tree’
Comments
Les Crises[Video] The companies that paid out 1,000 billion in dividends to their shareholders, by Olivier Berruyer

Source: Boursorama, Olivier Berruyer

They are taking off again. The dividends paid by the world's largest companies to their shareholders have risen by 7.7%. So which companies pamper their shareholders the most? Why have they decided to be so generous? Does this surge in dividends conceal a certain timidity? Analysis of the phenomenon with Olivier Berruyer, author of the blog Les Crises. Ecorama for February 20, presented by Jérôme Libeskind on boursorama.com

Read more

QC RSSIt's Motherfucking Sniffle Time

oops I made Marigold cuter

I will be at ECCC next weekend in Seattle! You should come say hi!

jwzInside The Federal Bureau Of Way Too Many Guns
By law, the system must remain intricate, thorny, and all but impenetrable.

"I get e-mails even from police saying, 'Can you type in the serial number and tell me who the gun is registered to?' Every week. They think it's like a VIN on a car. Even police. Police from everywhere. 'Hey, can you guys hurry up and type that number in?'" [...]

That's been a federal law, thanks to the NRA, since 1986: No searchable database of America's gun owners. So people here have to use paper, sort through enormous stacks of forms and record books that gun stores are required to keep and to eventually turn over to the feds when requested. It's kind of like a library in the old days -- but without the card catalog. They can use pictures of paper, like microfilm (they recently got the go-ahead to convert the microfilm to PDFs), as long as the pictures of paper are not searchable. You have to flip through and read. No searching by gun owner. No searching by name.

"You want to see the loading dock?" We head down a corridor lined with boxes. Every corridor in the whole place is lined with boxes, boxes up to the eyeballs. In the loading dock, there's a forklift beeping, bringing in more boxes. [...] Almost 2 million new gun records every month he has to figure out what to do with. Almost 2 million slips of paper that record the sale of a gun -- who bought it and where -- like a glorified receipt. If you take pictures of the gun records, you can save space. [...]

"These were Hurricane Katrina," he says, leaning against a stack. "They were all submerged. They came in wet. And then we dried them in the parking lot. When they got dry enough, the ladies ran them into the imager.

"Do you want to see the imagers? I'll show you. Imaging is like running a copy machine. So, like, if there's staples? So what these ladies along here do, from this wall to this wall, from six in the morning until midnight... staples." [...]

The vast majority of the gun records linking a gun to its owner are kept back at the various licensed dealers, the Walmarts, Bob's Gun Shops, and Guns R Us stores dotting America's landscape.

We have more gun retailers in America than we do supermarkets, more than 55,000 of them. We're talking nearly four times the number of McDonald's. Nobody knows how many guns that equals, but in 2013, U.S. gun manufacturers rolled out 10,844,792 guns, and we imported an additional 5,539,539. The numbers were equally astounding the year before, and the year before that, and the year before that. [...]

Serial numbers, it turns out, are tangled clogs of hell. Half the time what the cop is reading you is the patent number, not the serial number, or it's the ID of the importer, and then you have the "zero versus letter O" problem, the "numeral 1 versus letter l versus letter small-cap I" problem, and then there is the matter of all the guns with duplicate serial numbers [...]

Step Two: Hester calls the manufacturer (if it's a U.S.-made gun) or the importer (for foreign-made guns). He wants to know which wholesaler the gunmaker sold the weapon to. [...]

Step Three: You call the wholesaler and say, "Who did you sell it to?" The wholesaler, who also has to keep such records, goes through the same rigmarole the importer or manufacturer did, and he gives you the name of the gun store that ordered it from him. Let's say it was Walmart.

Step Four: If the Walmart is still in business, you call it. The actual store. Not corporate headquarters, or some warehouse, but the actual Walmart in Omaha or Miami or Wheeling. You call that store and you say, "To whom did you sell this Taurus PT 92 with this particular serial number on it?" By law, every gun dealer in America has to keep a "bound book" or an "orderly arrangement of loose-leaf pages" (some have been known to use toilet paper in protest) to record every firearm's manufacturer or importer, model, serial number, type, caliber or gauge, date received, date of sale. This record corresponds to the store's stack of 4473s, which some clerk has to go dig through in order to read you the information from the form. Or he can fax it. Congratulations. You have found your gun owner. [...]

There is no other place in America where technological advances are against the law. Unless you count the Amish. Even if a gun store that has gone out of business hands over records that it had kept on computer files, Charlie can't use them. He has to have the files printed out, and then the ladies take pictures of them and store them that way. Anything that allows people to search by name is verboten.

Previously, previously, previously.

Les CrisesRobert Parry, "The Gift of Self," by Ray McGovern

Source: raymcgovern.com, 28-01-2018

Another Robert, the poet Frost, recited this poem at John F. Kennedy's inauguration: "Such as we were we gave ourselves outright."

The words apply well to Robert Parry, who died last night. Truly, such as he was, he gave himself outright, as an accomplished journalist. Robert Frost was awarded four Pulitzers for poetry; Robert Parry deserved at least four for journalism.

Curiously, in today's corrupted journalistic circles, his uncompromising adherence to professional standards marginalized Robert Parry and made him a pariah rather than an honored figure. But Frost and Parry had in common an incorruptible individuality, impervious to compromise. Robert Parry watched men and women of lesser character succumb to it over the past three decades.

Medically, a third stroke in four weeks was the immediate cause. But what brought on the strokes? Bob was lucid enough to share with us his own thoughts on the underlying cause, in an apologia he struggled to write after his first stroke on Christmas night.

Read more

Hacker NewsRoses are red, Violets are blue, Working Atrium is sweet, and we want to hire YOU
Comments
Hacker NewsUsing a laser to wirelessly charge a smartphone safely across a room
Comments
Hacker NewsTwitter purges accounts, and conservatives cry foul
Comments
Hacker NewsNobody Wants to Let Google Win the War for Maps All Over Again
Comments
Hacker NewsReproducible, Verifiable, Verified Builds
Comments
Hacker NewsWhy I Collapsed on the Job
Comments
Hacker NewsAutomation and the Use of Multiple Accounts
Comments
Hacker NewsThe Secret Industry of Guys Writing Wedding Speeches for Other Guys
Comments
Hacker NewsAnti-depressants: Major study finds they work
Comments
Hacker NewsBijlmer: City of the Future, Part 1
Comments