
〰 Tidal, Archiloque's feed reader

roguelike development: PyGame vs BearLibTerminal

I started following these great YouTube tutorial series for my roguelike - https://www.youtube.com/playlist?list=PLKUel_nHsTQ1yX7tQxR_SQRdcOFyXfNAb

PyGame is used for graphics rendering and multiple tiles. However I see that BearLibTerminal is recommended a lot in here. Which library is better? What are the pros and cons of each of them?
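For a feel of the difference: BearLibTerminal gives you a terminal-style grid out of the box, while PyGame leaves you to manage surfaces, blitting, and the event loop yourself. Below is a minimal sketch of a BearLibTerminal "walk the @ around" loop, assuming the bearlibterminal Python package is installed; function and constant names are from that wrapper as I recall them, so treat it as a starting point rather than a definitive reference.

# minimal BearLibTerminal sketch (pip install bearlibterminal)
from bearlibterminal import terminal

terminal.open()                                   # create the window
terminal.set("window: size=80x25, title='roguelike'")
px, py = 40, 12                                   # player position
while True:
    terminal.clear()
    terminal.put(px, py, ord('@'))                # draw the player glyph
    terminal.refresh()
    key = terminal.read()                         # blocks until a key event
    if key in (terminal.TK_CLOSE, terminal.TK_ESCAPE):
        break
    elif key == terminal.TK_LEFT:
        px -= 1
    elif key == terminal.TK_RIGHT:
        px += 1
terminal.close()

An equivalent PyGame program needs window setup, an image or font for the glyph, an explicit event loop, and a frame clock, which is more flexible for multi-tile graphics but more code for a pure ASCII roguelike.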

submitted by /u/Ziik_bg
Hacker News: Google's response to the European Commission fine
Planet Intertwingly: AI first—with UX
An AI-first strategy will only work if it puts the user first. The big news from Google's I/O conference was Google's "AI-first" strategy. This isn't entirely new: Sundar Pichai has been talking about AI first since last year. But what exactly does AI first mean? In a Quora response, Peter Norvig explains Google's "AI first" direction by saying that it's a transition from information retrieval to informing and assisting users. Google's future isn't about enabling people to look things up; it's about anticipating our needs, and helping us with them. He realizes the big problem:

For example, Google Now telling you it is time to leave for an appointment, or that you are now at the grocery store and previously you asked to be reminded to buy milk. Assisting means helping you to actually carry out actions—planning a trip, booking reservations; anything you can do on the internet, Google should be able to assist you in doing. With information retrieval, anything over 80% recall and precision is pretty good—not every suggestion has to be perfect, since the user can ignore the bad suggestions. With assistance, there is a much higher barrier. You wouldn't use a service that booked the wrong reservation 20% of the time, or even 2% of the time. So, an assistant needs to be much more accurate, and thus more intelligent, more aware of the situation. That's what we call "AI-first."

All applications aren't equal, of course, and neither are all failure rates. A 2% error rate in an autonomous vehicle isn't the same as a map application that gives a sub-optimal route 2% of the time. I'd be willing to bet that Google Maps gives me sub-optimal routes at least 2% of the time, and I never notice it. Would you? And I've spent enough time convincing human travel agents (remember them?) that they booked my flight out of the wrong airport that I think a 2% error rate on reservations would be just fine.

The most important part of an assistive, AI-first strategy, as Pichai and many other Google executives have said, is "context." As Norvig says, it's magic when it tells you to leave early for an appointment because traffic is bad. But there are also some amazing things it can't do. If I have a doctor's appointment scheduled at 10 a.m., Google Calendar can't prevent someone from scheduling a phone call from 9 to 10 a.m., even though it presumably knows that I need time to drive to my appointment. Furthermore, Google Now's traffic prediction only works if I put the address in my calendar. It doesn't say "Oh, if the calendar just says 'doctor,' he drives to this location." (Even though my doctor, and his address, are in Google Contacts. And even though my phone knows where it is, either through GPS or triangulation of cell tower signals.) And I'm lazy: I don't want to fill that stuff in by hand whenever I add an appointment. I just want to say "doctor." If I have to put more detail into my calendar to enable AI assistance, that's a net loss. I don't want to spend time curating my calendar so an AI can assist me.

That's context. It doesn't strike me as a particularly difficult AI problem; it might not be an AI problem at all, just some simple rules. Is the car moving before the appointment, and does it stop moving just before the appointment starts? Who is the appointment with, and can that be correlated with data in Google contacts? Can the virtual assistant conclude that it should reserve some travel time around appointments with "doctor" or "piano lesson"?
The problem of context is really a problem with user experience, or UX. What's the experience I want in being "assisted"? How is that experience designed? A design that requires me to expend more effort to take advantage of the assistant's capabilities is a step backward. The design problem becomes more complex when we think about how assistance is delivered. Norvig's "reminders" are frequently delivered in the form of asynchronous notifications. That's a problem: with many applications running on every device, users are subjected to a constant cacophony of notifications. Will AI be smart enough to know what notifications are actually wanted, and which are just annoyances? A reminder to buy milk? That's one thing. But on any day, there are probably a dozen or so things I need, or could possibly use, if I have time to go to the store. You and I probably don't want reminders about all of them. And when do we want these reminders? When we're driving by a supermarket, on the way to the aforementioned doctor's appointment? Or would it just order it from Amazon? If so, does it need your permission? Those are all UX questions, not AI questions. And let's take it a step further. How does the AI know that I need milk? Presumably because I have a smart, internet-enabled refrigerator—and I may be one of the few people who have never scoffed at the idea of a smart refrigerator. But how does the refrigerator actually know I need milk? It could have a bar code reader that keeps track of the inventory, and shelves with scales so it knows how much milk is in the carton. Again, the hard questions are about UX, not AI. That refrigerator could be built now, with no exotic technology. Can users be trained to use the bar code reader and the special shelves? That's what Google needs to be thinking about. Changing my experience of using the refrigerator might be a fair tradeoff for the inconvenience of running out of milk—and trivial conveniences are frequently what make great user experience. But someone who really understands users needs to think seriously about how to make the tradeoff as minimal as possible. (And I admit, the answer might turn it back into an AI problem.) If that tradeoff isn't made correctly, the AI-and-internet-enabled smart refrigerator will end up being just another device with a lot of fancy features that nobody uses. I haven't touched on privacy, which is certainly a UX issue (and which most of my suggestions would throw out the window). Or security, which isn't considered a UX issue often enough. Or any of a dozen problems that involve thinking through what users really want, and how they want to experience the application. AI-first is a smart strategy for Google, but only if they remember AI's limitations. Pichai is right to say that AI is all about context. In a future where humans and computers are increasingly in the loop together, understanding context is essential. But the context problem isn't solved by more AI. The context is the user experience. What we really need to understand, and what we've been learning all too slowly for the past 30 years, is that technology is the easy part. Designing systems around the users' needs is hard. But it's not just difficult: it's also all-important. AI first will only be a step forward if it puts the user first. Continue reading AI first—with UX.
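The "just some simple rules" point above can be made concrete with a few lines of Python. This is only an illustrative sketch; the keyword list and the 30-minute travel buffer are invented values, not anything Google or the article specifies.

from datetime import datetime, timedelta

TRAVEL_KEYWORDS = ("doctor", "dentist", "piano lesson")   # hypothetical triggers
TRAVEL_BUFFER = timedelta(minutes=30)                      # assumed travel time

def blocked_slots(appointments):
    """Yield (start, end) ranges a scheduler should keep free, padding
    keyword appointments with travel time on both sides."""
    for title, start, end in appointments:
        if any(k in title.lower() for k in TRAVEL_KEYWORDS):
            yield (start - TRAVEL_BUFFER, end + TRAVEL_BUFFER)
        else:
            yield (start, end)

appts = [("Doctor", datetime(2017, 6, 28, 10, 0), datetime(2017, 6, 28, 10, 30))]
for slot in blocked_slots(appts):
    print(slot)   # 09:30-11:00 is blocked, so a 9-10 call would be refused

The hard part, as the article argues, is not this logic but the UX around it: where the keywords come from, and how the assistant earns the context without making the user curate it.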
Charles Stross: How to get a signed copy of The Delirium Brief

(Cover: The Delirium Brief)

So The Delirium Brief is imminent! It officially goes on sale on July 13th in the UK and (in a different binding, from a different publisher) on July 11th in the USA.

I'll be doing my usual launch reading/signing event in Edinburgh at Blackwell's Bookshop on Wednesday July 12th — it's a ticketed event but tickets are free.

You can order signed copies of The Delirium Brief both from Blackwell's (see the bottom of the linked event page for details) and from my favourite local specialist SF bookstore, Transreal Fiction in Edinburgh. (Transreal takes Paypal and can ship overseas; Mike can also provide signed copies of many of my other books upon request.)

Planet Intertwingly: Data in its natural format
Integrate and access any form of data using a multi-model database.Each database management system, relational or non-relational, has its strengths and weaknesses. Often, application developers or database designers choose the database system they know or like the best, and fit the data into that system—whether it’s suitable or not. In large organizations, this process of choosing a database system, building to it, and wrangling data to fit has often been repeated over and over. An enterprise may have dozens or even hundreds of data stores created by different departments, different business units, or by companies that have been acquired. When decisions-makers need to combine that data for analysis, a complex and often brittle extract-transform-load (ETL) process is done to move relevant data into an analytics data store, a spreadsheet, a report, or a presentation. Without its original context and format, the extracted data is not as meaningful as it should be. Instead of fitting the data to the database, what if we fit the database to the data? Instead of connecting silos, what if we just got rid of them altogether? Database models “Model” refers to how a database stores and organizes the data it has been given. This has to do with the high-level conceptual organization, and how one interacts with the data via query language—not precisely how the data is physically stored; databases with nearly identical models might have very different physical implementations. The relational model The conventional database model is relational—the SQL databases many of us are familiar with. Data is stored in tables, where each row is an item in a collection, and each column represents a particular attribute of the items. Well-designed relational databases conventionally follow third normal form (3NF), in which no data is duplicated in the system. To accomplish this, references to any entity are made using foreign keys. So, for example, in an ecommerce system with an orders table and a customer table, the row referring to a single order will reference the ID of the customer, instead of storing the customer’s information directly. Here, we already see one of the problems with relational databases: that hypothetical order table doesn’t have the customer information, so a join is needed to pull that in. Each customer might have many addresses, stored on a customer address table. The order table can’t store a list of items in the order; that information is on the order items table. But the order items table only references product keys, so getting details like the name of the product requires looking at the products table. So, for something as simple as viewing order details, we have to join across at least five tables. This deconstruction of information into the relational model has advantages: With a homogenous data set, it is highly space efficient. The data integrity of 3NF ensures consistency about what is considered “true” in the system. A well-designed schema, for a domain where such a thing can exist, often reveals the shape of the domain and can guide application logic. There are disadvantages, too: Data integrity requires a high degree of dependency across tables. This can cause scaling problems because sharding across multiple machines may physically separate data that has to be joined to be useful. Relational data schemas are complex and hard to change, and have to be designed and implemented before data is added to the system. 
If you can’t predict what the incoming data will look like, it is impossible to create a meaningful schema for it. There is a mismatch between how data exists in the world, how it is stored in the database, and how it is used by any consuming application. This can cause a loss of context and detail, making the data less meaningful. Non-relational models Over the last decade or so, a number of new database models have emerged as engineers have tried to compensate for the disadvantages of the relational model, or needed to solve new sets of problems created by the size and scope of the internet. Collectively, these are called NoSQL databases. Document databases Document databases store complete, usually self-contained representations of entities. In a document-centered application—which includes anything from web content to medical record keeping—this can be a very natural approach to storing data. For this reason, and because they are conceptually easy to understand and use, the most popular NoSQL databases are document oriented. Graph databases A graph is a web of nodes and the connections (called “edges”) between them. The connectivity of the internet, the web, and social media has provided a number of important uses for graph databases, as they are the most natural way to model the network of relationships among a collection of entities. A subset of graph databases is the semantic triple store, which is more concerned with storing relational facts among dissimilar entities. So, while a relational graph might show you who is friends with whom, a triple store will tell you which part belongs to which subassembly. Key-value store A key-value store is, in essence, an associative array or dictionary. Because of their relative simplicity, they allow very fast writes. Reading out is very efficient, too, as long as you already know the key you are looking for. Searching and querying by the values is typically difficult. Problems of multiple models in the enterprise In a very large organization, where different business units and departments make engineering decisions to meet their own needs, a diversity of models will prevail. This is even more true if companies have been acquired over time. Also, business longevity means that legacy databases will often be in place. Different models for different applications makes perfect sense, as no single model is perfect for all situations. A system managing financial transactions needs absolute consistency, which usually means a relational database and third normal form. An enterprise knowledge management or customer records system might need a document-oriented system. Even if everyone in an organization used only relational databases, the differing schemas created to solve particular problems still create difficulty. Silos and how they get connected When the order data is the sales database, but the details that led up to the sale are in the CRM system, and the information about how that customer first connected with your company is in the marketing database, and their previous order information is in a database from a company you acquired, you have a problem. It becomes impossible to get a holistic view of any aspect of your business. There are a few common ways organizations try to overcome this gap. Report building Some analyst or team of analysts manually queries the various data stores and loads that information into a spreadsheet. This aggregated data becomes the basis for a report which is then read by executives. 
Problems with this approach: The new information created by building the report cannot stay up to date. The process has to be repeated any time you want to see new information. The copy-paste-manipulate process has a lot of opportunities for human error and bad judgement. It is extremely time consuming. Reports can only answer questions that report builders thought to ask ahead of building the report. Genuinely new insights that might come from being able to “play with” the data are not available. Data aggregation One step up from manually aggregating data into a report, is aggregating it into a new database, typically a relational database with a schema designed to answer particular types of analysis questions. Scripts are written to ETL data out of the other systems and into the new one, and then they’re scheduled to run every day or hour. Problems with this approach: Defining a schema that will work well for data coming from a variety of sources can be very difficult. The ETL process robs data of its original context and format, which may affect its meaningfulness. Like the reports, the new database will often only be able to answer the types of questions that were thought of when designing the schema. Changes in the way data is organized in the originating databases will require changes to the aggregating database. Multiple models, one database A multi-model database can solve these problems by providing a single storage solution for all the different types of data an organization handles. What is a multi-model database? A multi-model database supports multiple data models in their native form within a single, integrated back end, and uses data standards and query standards appropriate to each model. Queries are extended or combined to provide seamless queries across all the supported data models. Indexing, parsing, and processing standards appropriate to the data model are included in the core database product. A multi-model database has relational storage, document storage, graph storage, and key-value storage together in one place. Data does not need to be transformed before it is loaded, and a relational schema does not need to be designed before data is brought in. This allows querying across data of different types, without losing the context and formatting that often gives meaning to individual pieces of information. Why use a multi-model database? A multi-model approach allows you to keep data in the form the data wants to be in, it minimizes or eliminates the need to transform data to fit new formats, and it allows you to consume data before knowing how it will be organized. We already live in a multi-data model world. A multi-model database system brings that world view into a single application. To learn more about how to reduce complexity and shorten time-to-value using a multi-model database, download our free report Building on Multi-Model Databases. This post is a collaboration between O’Reilly and MarkLogic. See our statement of editorial independence. Continue reading Data in its natural format.
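The five-table join versus single-document contrast described above can be sketched in plain Python; the table contents here are invented purely for illustration, not taken from any particular database.

# Normalized, relational-style structures (one dict per "table")
customers   = {1: {"name": "Ada"}}
orders      = {100: {"customer_id": 1}}
order_items = {1000: {"order_id": 100, "product_id": 7, "qty": 2}}
products    = {7: {"name": "Widget"}}

def order_details(order_id):
    """Reassemble an order by 'joining' across the normalized structures."""
    order = orders[order_id]
    customer = customers[order["customer_id"]]
    items = [
        {"product": products[i["product_id"]]["name"], "qty": i["qty"]}
        for i in order_items.values() if i["order_id"] == order_id
    ]
    return {"customer": customer["name"], "items": items}

# Document-style: the same order stored whole, no joins needed
order_doc = {"customer": "Ada", "items": [{"product": "Widget", "qty": 2}]}

assert order_details(100) == order_doc

A multi-model store aims to let both shapes coexist, so the order can be kept as a document while the financial ledger that references it stays relational.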
Ars Technica: Google must stop demoting competitors in search results, EU rules

(Image credit: John Thys/AFP/Getty Images)

Google has been gut-punched by the European Commission for abusing its search monopoly to squeeze out other players on the Web. Europe's competition commissioner, Margrethe Vestager, had been expected to hit Google with a fine of around €1 billion, but the actual number is far larger: €2.42 billion, the largest anti-monopoly fine ever issued.

In addition to the fine, Google will be required to change its search algorithm so that every competing service is fairly crawled, indexed, ranked, and displayed. If Google fails to remedy its anti-competitive conduct within 90 days it will face daily penalty payments of up to 5 percent of the daily worldwide turnover of Google's parent company Alphabet. The commission's full statement on the decision makes for quite damning reading.

Google, as reported by the AFP news agency, "respectfully disagrees" with the EU's fine and is considering an appeal. We have asked Google for comment and will update this story when it responds.


Planet Intertwingly: Tool Updated: CentOS Download Plug-in R2 and CentOS Download Cacher R2 v1.0.0.2 published 2017-06-27
The CentOS Download Plug-in R2 tool and CentOS Download Cacher R2 have been updated with the following enhancements:

CentOS Download Plug-in R2 1.0.0.2
The CentOS Download Plug-in R2 can now use packages that are cached by the CentOSR2DownloadCacher's download_dir (referred to as localCache in plugin.ini) and get packages from the Internet at the same time. Previously, the CentOSR2Plugin was used on the BigFix Server in either of the following scenarios: fully air-gapped using the CentOSR2DownloadCacher, or internet-enabled without using the CentOSR2DownloadCacher. With this enhancement, you can cache the packages offline to save time downloading them during a patching cycle.

CentOS Download Cacher R2 1.0.0.2
Package sha1 download support: The CentOS Download Cacher R2 can now download packages as sha1 files instead of RPMs. Previously, when using "buildRepo --key centos-7-x64" with the download cacher, the CentOS repository "centos-7-x64" structure was mirrored offline. This might result in duplication of packages if they are found in multiple repositories. Using the --sha1_download_dir option downloads all packages from all repositories (keys) as files with a sha1 filename into a single flat directory. Only the packages are stored in the sha1_download_dir; each repository's metadata is still stored in the download_dir and still maintains the CentOS repository directory structure. Note: when using --sha1_download_dir, consider the cache limit of the BigFix server's sha1 file folder.

Repository access check: New commands verify whether you have access to the BigFix-supported CentOS base repositories and sub-repositories: check-baserepos and check-allrepos.

Storage space requirement check: A new command, check-storagereq, calculates and checks the storage space requirement when using the buildRepo command. It outputs the space required to download the repository metadata and packages with and without the --sha1_download_dir option.

Space-saving benchmarks: Benchmarks with the --sha1_download_dir option show a significant decrease in storage size, download size, and time when caching multiple repositories of the same CentOS version, because many packages are duplicated among repositories of the same CentOS version (for example, centos-6.8-x64, centos-6.7-x64, centos-6.6-x64). No space is saved if you only cache a single repository per CentOS version (for example, centos-6.8-x64, centos-7.1-x64).

Updated Tools Versions: CentOS Download Plug-in R2, version 1.0.0.2; CentOS Download Cacher R2, version 1.0.0.2.

Actions to Take: Update to CentOS Download Plug-in version 1.0.0.2 by using the Manage Download Plug-ins dashboard. Download the latest CentOS Download Cacher R2 from the BigFix Support site: for Windows systems, http://software.bigfix.com/download/bes/util/CentOSR2DownloadCacher.exe; for Linux systems, http://software.bigfix.com/download/bes/util/CentOSR2DownloadCacher-linux.tar.gz. This tool is supported on x86-64 (64-bit) systems.

Published Site Version: Patching Support, version 765.

Additional Resources: For more information about the new features, see the BigFix Patch for CentOS User Guide at https://ibm.biz/Bdi3k3.

Application Engineering Team, IBM BigFix
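This is not the IBM tool itself, but the deduplication idea behind --sha1_download_dir (one flat directory keyed by package digest) can be sketched in a few lines of Python; the paths are placeholders.

import hashlib, shutil
from pathlib import Path

def cache_by_sha1(rpm_path, cache_dir):
    """Copy a package into a flat cache named by its SHA-1 digest.
    The same RPM appearing in several repositories maps to one cached file."""
    digest = hashlib.sha1(Path(rpm_path).read_bytes()).hexdigest()
    target = Path(cache_dir) / digest
    if not target.exists():          # already cached: nothing new to store
        shutil.copyfile(rpm_path, target)
    return target

Because the filename depends only on the package bytes, repositories for centos-6.6, 6.7, and 6.8 that ship the same RPM all resolve to a single cached file, which is where the space savings described above come from.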
Hacker News: Google Gets Record $2.7B EU Fine for Skewing Searches
Planet Intertwingly: Four short links: 27 June 2017
Reading Papers, AR Kit, Demoing to Sell, and Secure Go How to Read a Scientific Paper: A Guide for Non-Scientists -- excellent advice. 1. Read the introduction (not the abstract). 2. Identify the BIG QUESTION. 3. Summarize the background in five sentences. 4. Identify the SPECIFIC QUESTION(S). 5. Identify the approach. 6. Draw a diagram for experiments, showing methods. 7. Summarize results from each experiment. 8. Do results answer the SPECIFIC QUESTION(S). 9. Read conclusion/discussion/interpretation section. 10. Now, read the abstract. 11. What do other researchers say about this paper? Made with ARKit -- selections of demos made with Apple's augmented reality framework. I may be shallow, but I'm excited to have this in my hands. The ruler made me go "wow." Everything I Wish I'd Known Before I Started Demoing SaaS -- the sales process turns on the pain points and the decision-maker, and this (good!) advice is how you make sure your demo moves you ahead on both. Go Language - Web Application Secure Coding Practices -- The main goal of this book is to help developers avoid common mistakes, while at the same time learning a new programming language through a "hands-on approach." This book provides a good level of detail on "how to do it securely," showing what kind of security problems could arise during development. (via Binni Shah) Continue reading Four short links: 27 June 2017.
Planet Intertwingly: SVP and General Counsel
When you shop online, you want to find the products you’re looking for quickly and easily. And merchants want to promote those same products. That's why Google shows shopping ads, connecting our users with thousands of advertisers, large and small, in ways that are useful for both.
Hacker News: $5 Steel Bowl from Ikea Spontaneously Ignites Contents
Planet Intertwingly: Possible future directions for data on the Web
As I enter my final days as a member of the W3C Team*, I’d like to record some brief notes for what I see as possible future directions in the areas in which I’ve been most closely involved, particularly since taking on the ‘data brief’ 4 years ago. Foundations The Data on the Web Best Practices, which became a Recommendation in January this year, forms the foundation. As I highlighted at the time, it sets out the steps anyone should take when sharing data on the Web, whether openly or not, encouraging the sharing of actual information, not just information about where a dataset can be downloaded. A domain-specific extension, the Spatial Data on the Web Best Practices, is now all-but complete. There again, the emphasis is on making data available directly on the Web so that, for example, search engines can make use of it directly and not just point to a landing page from where a dataset can be downloaded – what I call using the Web as a glorified USB stick. Spatial Data That specialized best practice document is just one output from the Spatial Data on the Web WG in which we have collaborated with our sister standards body, the Open Geospatial Consortium, to create joint standards. Plans are being laid for a long term continuation of that relationship which has exciting possibilities in VR/AR, Web of Things, Building Information Models, Earth Observations, and a best practices document looking at statistical data. Research Data Another area in which I very much hope W3C will work closely with others is in research data: life sciences, astronomy, oceanography, geology, crystallography and many more ‘ologies.’ Supported by the VRE4EIC project, the Dataset Exchange WG was born largely from this area and is leading to exciting conversations with organizations including the Research Data Alliance, CODATA, and even the UN. This is in addition to, not a replacement for, the interests of governments in the sharing of data. Both communities are strongly represented in the DXWG that will, if it fulfills its charter, make big improvements in interoperability across different domains and communities. Linked Data The Gartner Hype Cycle. CC: BY-SA Jeremykemp at English Wikipedia The use of Linked Data continues to grow; if we accept the Gartner Hype Cycle as a model then I believe that, following the Trough of Disillusionment, we are well onto the Slope of Enlightenment. I see it used particularly in environmental and life sciences, government master data and cultural heritage. That is, it’s used extensively as a means of sharing and consuming data across departments and disciplines. However, it would be silly to suggest that the majority of Web Developers are building their applications on SPARQL endpoints. Furthermore, it is true that if you make a full SPARQL endpoint available openly, then it’s relatively easy to write a query that will be so computationally expensive as to bring the system down. That’s why the BBC, OpenPHACTS and others don’t make their SPARQL endpoints publicly available. Would you make your SQL interface openly available? Instead, they provide a simple API that runs straightforward queries in the background that a developer never sees. In the case of the BBC, even their API is not public, but it powers a lot of the content on their Web site. The upside of this approach is that through those APIs it’s easy to access high value, integrated data as developer-friendly JSON objects that are readily dealt with. From a publisher’s point of view, the API is more stable and reliable. 
The irritating downside is that people don’t see and therefore don’t recognize the Linked Data infrastructure behind the API allowing the continued questioning of the value of the technology. Semantic Web, AI and Machine Learning The main Semantic Web specs were updated at the beginning of 2014 and there are no plans to review the core RDF and OWL specs any time soon. However, that doesn’t mean that there aren’t still things to do. One spec that might get an update soon is JSON-LD. The relevant Community Group has continued to develop the spec since it was formally published as a Rec and would now like to put those new specs through Rec Track. Meanwhile, the Shapes Constraint Language. SHACL, has been through something of a difficult journey but is now at Proposed Rec, attracting significant interest and implementation. But, what I hear from the community is that the most pressing ‘next thing’ for the Semantic Web should be what I call ‘annotated triples.’ RDF is pretty bad at describing and reflecting change: someone changes job, a concert ticket is no longer valid, the global average temperature is now y not x and so on. Furthermore, not all ‘facts’ are asserted with equal confidence. Natural Language Processing, for example, might recognize a ‘fact’ within a text with only 75% certainty. It’s perfectly possible to express these now using Named Graphs, however, in talks I’ve done recently where I’ve mentioned this, including to the team behind Amazon’s Alexa, there has been strong support for the idea of a syntax that would allow each tuple to be extended with ‘validFrom’, validTo and ‘probability’. Other possible annotations might relate to privacy, provenance and more. Such annotations may be semantically equivalent to creating and annotating a named graph, and RDF 1.1 goes a long way in this direction, but I’ve received a good deal of anecdotal evidence that a simple syntax might be a lot easier to process. This is very relevant to areas like AI, deep learning and statistical analysis. These sorts of topics were discussed at ESWC recently and I very much hope that there will be a W3C workshop on it next year, perhaps leading to a new WG. A project proposal was submitted to the European Commission recently that would support this, and others interested in the topic should get in touch. Other possible future work in the Semantic Web includes a common vocabulary for sharing the results of data analysis, natural language processing etc. The Natural Language Interchange Format, for example, could readily be put through Rec Track. Vocabularies and schema.org Common vocabularies, maintained by the communities they serve, are an essential part of interoperability. Whether it’s researchers, governments or businesses, better and easier maintenance of vocabularies and a more uniform approach to sharing mappings, crosswalks and linksets, must be a priority. Internally at least, we have recognized for years that W3C needs to be better at this. What’s not so widely known is that we can do a lot now. Community Groups are a great way to get a bunch of people together and work on your new schema and, if you want it, you can even have a www.w3.org/ns namespace (either directly or via a redirect). Again, subject to an EU project proposal being funded, there should be money available to improve our tooling in this regard. W3C will continue to support the development of schema.org which is transforming the amount of structured data embedded within Web pages. 
If you want to develop an extension for schema.org, a Community Group and a discussion on public-vocabs@w3.org is the place to start. Summary To summarize, my personal priorities for W3C in relation to data are: Continue and deepen the relationship with OGC for better interoperability between the Web and geospatial information systems. Develop a similarly deep relationship with the research data community. Explore the notion of annotating RDF triples for context, such as temporal and probabilistic factors. Be better at supporting vocabulary development and their agile maintenance. Continue to promote the Linked Data/Semantic Web approach to data integration that can sit behind high value and robust JSON-returning APIs. I’ll be watching … As of 1 July I’ll be at GS1, working on improving the retail world’s use of the Web. Keep in touch via my personal website and @philarcher1.
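The "annotated triples" idea discussed above (validFrom, validTo, probability) can already be approximated with named graphs, as the author notes. Here is a small sketch using the rdflib Python library; the example.org URIs and the 0.75 confidence value are made up, and the exact serialization call may differ slightly between rdflib versions.

from rdflib import Dataset, Literal, Namespace, URIRef
from rdflib.namespace import XSD

EX = Namespace("http://example.org/")
ds = Dataset()

# Put the statement itself in its own named graph...
claim = ds.graph(URIRef("http://example.org/graphs/claim1"))
claim.add((EX.alice, EX.worksFor, EX.acme))

# ...then annotate that graph with confidence and validity in the default graph.
ds.add((claim.identifier, EX.probability, Literal(0.75)))
ds.add((claim.identifier, EX.validFrom, Literal("2017-01-01", datatype=XSD.date)))

print(ds.serialize(format="trig"))   # TriG keeps the graph/annotation pairing

A dedicated syntax for per-triple annotations, as proposed in the post, would express the same thing without the ceremony of minting one graph per statement.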
Planet Intertwingly: Thriving in a post-browser world
How can web professionals succeed in a world where the browser is declining in relevance? (Hint: Specialize.) Continue reading Thriving in a post-browser world.
Planet Intertwingly: Welcome our Experts to your Team!
Looking for an affordable way to deal with the skills gap? Presenting IBM Collaboration Solutions Ask The Experts Offering!  
Hacker News: European Commission fines Google €2.42B
Planet Intertwingly: Machine Learning as Prescriptive Analytics
I made a mistake about machine learning. And I made it many times over. In talks and in writing I claimed that machine learning and predictive analytics are almost the same thing. More precisely, my view was simple: analytics can be divided into four categories, as shown in the figure below (see the Analytics Landscape post for details). In that 2D landscape I placed machine learning close to predictive analytics. Of course, I also regarded optimization as the most important of all analytics techniques, because it delivers the highest business value. What else would you expect from someone who has worked on optimization for nearly 30 years? No wonder this view is popular in the optimization community...

If the view is so popular, why is it wrong? First, to reassure readers about my sanity: I still think optimization is best suited to computing optimal decisions. The problem lies elsewhere. I started to suspect a problem when I met customers who wanted to use machine learning to solve every business problem they had. I can't blame them; there is so much hype around machine learning today that it is easy to believe it can solve everything. I can't blame them, but I was a little shocked to see people wanting to use machine learning to compute optimal decisions. My mental model was that machine learning should be used for predictions, and optimization should be used to optimize decisions based on those predictions: machine learning for predictions, optimization for decisions.

To make this concrete, take an example. We can use machine learning to build a model that predicts future demand (future sales) for a retail chain. We can then use optimization to compute the best inventory management for those stores, keeping inventory costs and the risk of stock-outs to a minimum. Simple and powerful, right?

Yet I kept meeting people who treated machine learning as a way to learn good decisions directly. That made me think. Are they really wrong? After all, maybe they aren't. To see why, let's look at some facts. One of the first famous examples of machine learning was the program Arthur Samuel developed that learned to play checkers better than he could. Samuel used machine learning to learn how to play, that is, how to make the right decisions. There are other examples of machine learning being used to learn how to make the right decisions. IBM Watson, for instance, beat the best humans at Jeopardy a few years ago. IBM Watson is a learning machine (at IBM we like to call it a cognitive machine). It ingested the content of Wikipedia and was trained with question-and-answer pairs from Jeopardy. During training, its performance at answering Jeopardy questions kept improving until it beat the best human players. IBM Watson learned how to answer questions; it learned how to determine the best answer for questions it had never seen. More recently, Google's AlphaGo won a match against a top Go player. AlphaGo is also a learning machine. It was first trained on a large number of recorded Go games between top players, then trained against itself. During training its play kept improving until it surpassed top human players. AlphaGo learned how to determine the best next move for any Go board position.

Looking at these examples, we can no longer say that machine learning is only for predictions. Machine learning can be used to compute decisions. Does that mean machine learning will replace optimization? Let's come back to that question later.

When optimization is used directly, someone (usually an operations research practitioner, who is really the best kind of data scientist) creates an optimization model for the business problem to be solved. For example, that person models all the inventory constraints (available space, cost per stocked item, transportation costs, and so on) and the business objective (a combination of inventory cost and the risk of lost sales due to stock-outs). The model is then regularly combined with a particular instance of the business problem to yield an optimization problem (for example, last week's remaining inventory and this week's forecast demand). That problem is solved by an optimization algorithm, producing a result (for example, a replenishment plan). This can be depicted with the first diagram.

With machine learning, we start with (many) instances of the business problem to be solved, together with their known answers (yes, this is supervised learning). We then train a model on these examples, with the goal that the model can compute the best solution for new problems. The model is then applied to new instances to compute their answers. The key point is that the training phase is itself an optimization problem: it amounts to finding the model with the best predictive power on the given training examples (see the appendix below for more). We therefore get the machine learning workflow in the second diagram.

Hard to compare the two diagrams, right? When we choose between optimization and machine learning to compute the best decisions, we are really choosing when to use optimization. Optimization is used either way. That is worth pondering.

Finally, one question remains: why would we use optimization one way rather than the other? Should we train a machine learning model on examples of past good and bad decisions? Or should we apply optimization separately to each new instance? I don't yet have a definitive, unified answer, and readers' suggestions are welcome. My intuition is that the right answer is probably a combination of both.

Appendix: machine learning is an optimization problem. The view of machine learning as optimization is not mine. For example, while writing "Machine Learning and Optimization" I found that John Mount put it well: "In my opinion the best machine learning work is an attempt to re-formulate prediction as an optimization problem (see, for example: Bennett, K. P. and Parrado-Hernandez, E. (2006). The interplay of optimization and machine learning research. Journal of Machine Learning Research, 7, 1265-1281). Good machine learning papers use good optimization techniques, and poor machine learning papers (which, in fact, most are) use poor and outdated ad hoc optimization techniques." I also found this definition of machine learning as an optimization problem while writing "What is machine learning?".

This article was translated from Machine Learning As Prescriptive Analytics.
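The retail example above (predict demand with machine learning, then optimize the order) can be sketched end to end in a few lines of Python with numpy; the demand series, unit costs, and scenario noise are all invented for illustration.

import numpy as np

# 1. "Machine learning" step: fit a trivial demand model on past weeks.
weeks  = np.arange(10)
demand = 100 + 5 * weeks + np.random.default_rng(0).normal(0, 8, 10)
slope, intercept = np.polyfit(weeks, demand, 1)
forecast = slope * 10 + intercept          # predicted demand for next week

# 2. "Optimization" step: pick the order quantity that minimizes expected cost,
#    trading holding cost against lost-sales cost (newsvendor-style).
HOLD_COST, STOCKOUT_COST = 1.0, 4.0        # invented unit costs
scenarios = forecast + np.random.default_rng(1).normal(0, 8, 1000)

def expected_cost(q):
    over  = np.maximum(q - scenarios, 0) * HOLD_COST       # leftover stock
    under = np.maximum(scenarios - q, 0) * STOCKOUT_COST   # missed sales
    return (over + under).mean()

candidates = np.arange(int(forecast) - 30, int(forecast) + 31)
best_q = min(candidates, key=expected_cost)
print(f"forecast {forecast:.0f} units, order {best_q} units")

The alternative the article describes would instead train a model directly on past (situation, chosen order) pairs; either way an optimization is solved, it is just a question of whether it happens at training time or per instance.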
Planet PostgreSQL: Pavel Stehule: replace_empty_string
For the last half year I have been working on migrating a relatively big application from Oracle to Postgres. This application is based on intensive usage of stored procedures, triggers, and views. The base tools are ora2pg and plpgsql_check. Many thanks to Gilles Darold for his work on ora2pg. Half a year ago this tool had almost zero support for PL/SQL - and now it is able to translate 90% of a big code base of old PL/SQL code to PLpgSQL. There were lots of issues, but they were often fixed by the next day. Thank you.

Some tools I had to write myself, and I have some contributions for Orafce as well. The last tool I wrote for this project is the replace_empty_string extension. Oracle doesn't store empty strings - it translates them to NULL implicitly. To ensure similar behaviour I wrote a generic trigger that replaces any empty string with NULL. By default it is quiet, but it can optionally raise a warning when a string is empty.

Example:
CREATE EXTENSION replace_empty_string;

CREATE TABLE res (
id int4,
idesc text,
test1 varchar,
test2 text
);

CREATE TRIGGER res_replace_empty_string
BEFORE UPDATE OR INSERT ON res
FOR EACH ROW
EXECUTE PROCEDURE replace_empty_string ();

INSERT INTO res VALUES (1, 'first', NULL, '');
INSERT INTO res VALUES (2, NULL, '', 'Hello');

\pset null ****

SELECT * FROM res;
id | idesc | test1 | test2
----+-------+-------+-------
1 | first | **** | ****
2 | **** | **** | Hello
(2 rows)

UPDATE res SET idesc = ''
WHERE id = 1;

SELECT * FROM res;
id | idesc | test1 | test2
----+-------+-------+-------
2 | **** | **** | Hello
1 | **** | **** | ****
(2 rows)
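From an application, the trigger can be checked with any Postgres driver; here is a small sketch with psycopg2, where the connection string is a placeholder and the table is the res table created above.

import psycopg2

conn = psycopg2.connect("dbname=test user=postgres")   # placeholder DSN
with conn, conn.cursor() as cur:
    cur.execute("INSERT INTO res VALUES (3, %s, %s, %s)", ("third", "", "x"))
    cur.execute("SELECT test1 FROM res WHERE id = 3")
    assert cur.fetchone()[0] is None    # '' was rewritten to NULL by the trigger
conn.close()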
Hacker News: Author of cURL denied entry to the USA
Hacker News: HMS Queen Elizabeth is 'running outdated Windows XP', raising cyber attack fears
Hacker News: Cheat to Win: Learn React with Copywork
Planet Intertwingly: Deploying Java Applications on CICS Liberty from Scratch (a series)
Part 1: Using the TSQ sample provided by CICS to deploy a simple Java program on CICS Liberty. Intended audience: readers with one year or more of mainframe experience. Background knowledge: CICS basics.

Background on CICS support for Java has already been covered in detail in two earlier posts: one on CICS support for Java, and another, "An introduction to CICS Liberty". Interested readers can refer to those two posts. With the basic introduction to CICS Java complete, this series of articles will walk you through, step by step, how to put CICS Java into practice in concrete scenarios. As the first article of the series, this post covers the basics: preparing the environment, obtaining the sample, and deploying the application. The goal is that even readers with no Java background can easily set up a Java application running on CICS.

Environment preparation. Requirements: the CICS version must be CICS TS 5.1 or later, and the user must have access to a USS (Unix System Services) directory.

Tool installation: Eclipse, CICS Explorer, and IBM Explorer for z/OS. Eclipse is the Java development environment; first download it from http://www.eclipse.org/downloads/eclipse-packages/. That page contains the latest Eclipse packages; we recommend the latest Eclipse IDE for Java EE Developers. As shown in Figure 1, the platform can be selected at the top right of the page (Figure 1: Eclipse download page). After downloading, start Eclipse and choose a directory as the Eclipse workspace, used to store the project files you develop (Figure 2: Eclipse startup page). Next, install the required Eclipse plug-ins: CICS Explorer and IBM Explorer for z/OS. As shown in Figure 3, choose to install new software and add the following URL as a software site: http://public.dhe.ibm.com/ibmdl/export/pub/software/htp/zos/tools/aqua3.1/ (Figure 3: install new software). As shown in Figure 4, select the software to install, then accept the defaults to finish the installation (Figure 4: select the software to install). (Note: for installation details for different versions of Eclipse and z/OS Explorer, see https://developer.ibm.com/mainframe/products/downloads/eclipse-tools/.) At this point the lab environment is ready and the real work can begin.

Creating a CICS Liberty JVM server. First, add SDFJAUTH to the STEPLIB of the startup job of the CICS region you are using. Then modify the SIT parameters to add USSHOME and JVMPROFILEDIR, for example: USSHOME=/usr/lpp/cicsts/cics700; JVMPROFILEDIR=/u/user1/jvmprofiles.

After the CICS region starts, the next step is to prepare a JVM profile. The JVM profile is the configuration file for the JVM server; it must be EBCDIC-encoded and contains the environment variables and configuration the JVM server needs. It is stored in a USS directory and must be created by the user. For convenience, new users can copy the sample file provided with the system the first time they create a JVM profile; the samples are usually under the CICS installation directory, for example /usr/lpp/cicsts/cics700/JVMProfiles. Copy the JVM profile into your own USS directory and edit it. In this article's example, the JVM profile parameters to pay attention to are:

JAVA_HOME=/usr/lpp/java/J8.0_64 (the Java installation directory; set it according to your system)
WORK_DIR=/u/user1/output (the Liberty work directory, used to store CICS Liberty user files)
WLP_INSTALL_DIR=&USSHOME;/wlp (the CICS Liberty installation directory, under the CICS installation directory; keep the default value from the sample and do not modify it)
-Dcom.ibm.cics.jvmserver.wlp.autoconfigure=true (when set to true, CICS automatically creates or configures the Liberty profile server.xml from the relevant JVM profile parameters when the Liberty JVM server starts)
-Dcom.ibm.cics.jvmserver.wlp.server.http.port=9980 (the HTTP request port; the default is 9080, specify your own)
-Dcom.ibm.cics.jvmserver.wlp.server.https.port=8843 (the HTTPS request port; the default is 9443, specify your own)

Define and install the JVM server resource in the CICS region:
OVERTYPE TO MODIFY
CEDA  DEFine JVmserver( WLPJVM )
 JVmserver    ==> WLPJVM
 Group        ==> MYGRP
 DEScription  ==>
 Status       ==> Enabled
 Jvmprofile   ==> DFHWLP
 Lerunopts    ==> DFHAXRO
 Threadlimit  ==> 015

If the installation succeeds, the JVM server should be in enabled status:
STATUS:  RESULTS - OVERTYPE TO MODIFY
 Jvm(WLPJVM  ) Ena     Prf(DFHWLP  ) Ler(DFHAXRO )
 Threadc(001) Threadl( 015 ) Cur(26956064)

You can also confirm that Liberty started successfully by visiting the Liberty server welcome page at http://yourCicsHost:portnumber, as shown in Figure 5 (Figure 5: Liberty started successfully).

If you now look at the WORK_DIR=/u/user1/output path, you will see that the output folder /u/user1/output/APPLID has been created. This folder holds the files used by the installed JVM server, such as log, config, STDERR, and STDOUT. Once the JVM server starts successfully, you can find messages such as "A CWWKF0011I: The server defaultServer is ready to run a smarter planet." in the message.log file.
Figure 6 shows an example of the output folder structure; the folder is located under WORK_DIR/APPLID (Figure 6: example folder structure). As shown in Figure 6, the Liberty server configuration file server.xml can be found in this directory. If you later need to modify server.xml, you can do so from the Remote System Explorer perspective in z/OS Explorer, as shown in Figure 7 (Figure 7: editing server.xml with z/OS Explorer).

Importing, configuring, and running the Liberty sample application provided by CICS. If you have read this far, congratulations: most of the work is done and a good foundation is in place for the rest of the exercises. Now that we have an environment and a Liberty server, we can run some Liberty applications on it.

The CICS product already ships with some good sample applications, and here we will use the "CICS Temporary Storage Queue (TSQ)" sample. First, create a new example project in z/OS Explorer and select the TSQ example, as shown in Figures 8 and 9, and finish the project creation with the default settings (Figure 8: create a new example project; Figure 9: select the TSQ example). Once the project is created, the project packages appear in the Project Explorer, as shown in Figure 10 (Figure 10: project packages).

At this point the project shows some errors. To resolve them we need to configure the Target Platform so that it includes the required packages. Open Eclipse > Preferences > Target Platform and click Add, as shown in Figure 11 (Figure 11: add a Target Platform). As shown in Figure 12, choose "CICS TS V5.3 with Liberty and PHP" and complete the wizard with the defaults, then set the newly created target definition as the active Target Platform; the earlier project errors are now resolved (Figure 12: selecting "CICS TS V5.3 with Liberty and PHP").

Next we deploy the application to the CICS environment. There are two ways to deploy a Liberty application: put the application directly into the dropins folder (which sits in the same directory as server.xml), in which case the Liberty JVM server installs it automatically, or install it as a CICS bundle resource. We recommend the latter, because with the bundle approach CICS manages the resource, which makes CICS resource maintenance and version management easier. In this exercise the CICS bundle resource to deploy is com.ibm.cics.server.examples.wlp.tsq.bundle.

Before deploying the bundle resource, open the file com.ibm.cics.server.examples.wlp.tsq.ebabundle, change the jvmserver name to the JVM server we installed, and save it. With that change made, deployment can begin: export the bundle files to a USS directory in the usual way for bundles, as shown in Figure 13 (Figure 13: export the bundle files to a USS directory).

Define and install the bundle resource in the CICS region:
CEDA  DEFine Bundle( TSQSAMP )
 Bundle         : TSQSAMP
 Group          : MYGRP
 DEScription  ==>
 Status       ==> Enabled            Enabled | Disabled
 BUndledir    ==> /yourBundleDir/com.ibm.cics.server.examples.wlp.tsq.bundle
 (Mixed Case) ==> _1.0.1

Once installed successfully, the bundle should be in enabled status:
STATUS:  RESULTS - OVERTYPE TO MODIFY
 Bun(TSQSAMP ) Ena         Par(00001) Tar(00001)
 Enabledc(00001) Bundlei(com.ibm.cics.server.exampl)

The following message appears in the CICS startup job log:
+CWWKZ0001I: Application com.ibm.cics.server.examples.wlp.tsq.app  149  started in 0.853 seconds.

Finally, let's check that the installed application runs correctly. Open a browser and visit the URL http://yourCicsHost:yourPort/com.ibm.cics.server.examples.wlp.tsq/ (Figure 14: the Liberty application running normally). If you can see the page in Figure 14, congratulations: you have successfully set up a Liberty server and deployed a Liberty application.

To recap: starting from installing Eclipse and z/OS Explorer, through starting a Liberty server and installing and deploying a Liberty application, this article has shown the most basic CICS Liberty support. In later installments we will build on this TSQ sample application to introduce more CICS Liberty capabilities, such as security. We hope interested readers will keep following the series.

Author: Wang Chengfang. Email: wangcfbjATcn.ibm.com (replace AT with @). Disclaimer: this article represents the author's personal views only and is unrelated to IBM's positions, strategies, or opinions. Because of translation, some terminology may differ; when in doubt, refer to the English. The data comes from a lab environment and is for reference only. If you are interested in our topics, please contact us by email.
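Once the bundle is enabled, a scripted smoke test can confirm the sample answers over HTTP. This is only a sketch: the host and port are the placeholders used in the article, and the context root is the one shown above.

from urllib.request import urlopen

url = "http://yourCicsHost:9980/com.ibm.cics.server.examples.wlp.tsq/"
with urlopen(url, timeout=10) as resp:
    # any 200 response with HTML content means the Liberty app is serving requests
    print(resp.status, resp.headers.get("Content-Type"))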
Planet Intertwingly: Understanding Industrial Computer Systems
The computer systems used in industrial environments are different from regular computers in many ways – they look different, their components are different, they perform different functions and they need to meet different requirements in terms of durability and ruggedness.   Hardware and Peripheries   The special demands of the environments that industrial computers work in call for special hardware solutions:   Industrial computers use specialized enclosures tailored specifically for the site of usage. The enclosures vary in terms of the materials they are made from and also in terms of mounting systems. Some of these special solutions make the computers suitable for being used indoors, others allowing them to work reliably outdoors.   Displays – Strangely, not all industrial computers have displays attached to them, but the ones that do use monitors that look and work in a way completely different from office computer screens. Some industrial displays are very simple, and in many cases they don’t even need to be color screens because they only display parameters expressed in letters and numbers. While displays for other applications are large and able to display complex graphics.  All industrial displays are rugged, resistant to grime, dust, dirt, humidity and extreme temperatures. There is a special type of industrial computer called panel PCs that integrates a monitor, in some cases a touch panel into the same structure that accommodates the motherboard.   The keyboard attached to industrial computers also needs to be suitable for harsh circumstances such as the conditions in production areas or special environments such as sterile laboratories. Some keyboards are made from stainless steel, others are washable.   The mouse – another important component of industrial computer systems, the special mouse used for these unique applications are usually made from silicone rubber and use special contact to guarantee reliability.   Internal components – the parts used for industrial computers are usually rugged and feature special designs. Many applications use alternative cooling solutions such as liquid cooling, others are equipped with fanless cooling systems and many industrial constructions incorporate additional cooling systems that use air filters.   Controls are also designed to suit the requirements of the harsh environment they are used in and are usually more robust than the buttons on regular computers.   Most industrial computers are attached to robust uninterruptible power supplies that supply the unit with sufficient electric current to complete the processes it is engaged in and to shut down properly, without getting damaged and without damaging the machines it is controlling if a power outage occurs.   Software   Industrial computers are PCs, but they use different software and perform different functions than regular consumer computers. In many instances, the programs that run on these PCs are custom-written to perform special operations such as the command and control of a CNC machine. The software also incorporates special security features that protect its integrity. Many industrial computers are equipped with a watchdog timer to detect malfunctions and to recover the system afterwards.   Longevity   Industrial computers are special in terms of lifecycle as well. The life expectancy of standard computers is only 2-3 years, but many industrial applications are able to function for as much as a decade without fail. 
The reason for this extraordinary longevity is the special components, the durable enclosure and the special, very stable software applications that run on these units.     Manufacturers Industrial computers are produced by specialized manufacturers like Chassis Plans that have the production line and processes to meet the needs of this specialized environment. Because industrial systems are so often customized for the specific use case, industrial computer manufacturers typically do very small production runs with many more quality checks than general purpose computer companies.
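The watchdog timer mentioned above is usually a hardware circuit, but the pattern is easy to illustrate in software. The sketch below is a toy Python version under invented parameters; a real industrial controller would feed a hardware timer that power-cycles the machine instead of exiting a process.

import threading, os

class Watchdog:
    """Toy software watchdog: if feed() is not called within `timeout`
    seconds, the recovery action runs."""
    def __init__(self, timeout, on_expire):
        self.timeout, self.on_expire = timeout, on_expire
        self._timer = None
        self.feed()

    def feed(self):
        # restart the countdown; a healthy control loop calls this every cycle
        if self._timer:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout, self.on_expire)
        self._timer.daemon = True
        self._timer.start()

wd = Watchdog(timeout=5.0, on_expire=lambda: os._exit(1))  # supervisor restarts us
# the main control loop would call wd.feed() on each healthy iteration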
Planet Intertwingly: Take Your Hunting To A Whole New Level With These Useful Apps
Most people think of hunting as a fairly low-tech experience. In fact, however, technology can add a lot to your hunting adventures. When used correctly, apps and other modern tools can help you stay safe and can improve your chances of having a successful hunt. The apps that are listed below can be a great addition to your hunting arsenal. * Accuweather -- Available for Android and iOS Although there are a lot of good weather apps out there, AccuWeather is one of the best. This popular app provides a wealth of information about the weather in your immediate vicinity. You can find everything from the temperature to the wind speed and direction. The way the wind is blowing is particularly important when you are hunting whitetail deer so that you don't inadvertently give away your position with your scent. * Hunting & Fishing Times by iSolunar -- Available for iOS If you want to gather information on the best times and places for hunting in your area, look no further than the Hunting & Fishing app by iSolunar. With this app, you can discover information such as what time the sun is going to come up or what time it is going to set. You can also get detailed information on the peak feeding times for animals in your area. *  Google Earth -- Available for Android and iOS If you want a convenient way to get the lay of the land before you head out, Google Earth is the perfect solution. By allowing you to view satellite imagery of the areas where you are going to go hunting, you can easily identify the best locations for hunting. You can also avoid obstacles that could slow you down. * Deer Tactics & Calls -- Available for Android and iOS When you hear a deer call in the wild, it can be hard to identify it or to understand what it means. This app teaches you about 12 common sounds that deer make. By learning how to better understand the sounds that you hear when you are out in the field, you can ultimately become a better hunter. After all, the key to success with hunting lies in intimately knowing the habits of your prey. If you’re looking for something on predator calls for hunting, then try there. * Sunrise Sunset -- Available for Android and iOS There are laws regarding how light it has to be outside before you can hunt. Using the Sunrise Sunset app, you can be sure that you aren't hunting too early or too late in the day. The last thing that you want is to inadvertently wind up on the wrong side of the law. * SAS Survival Guide -- Available for Android and iOS The SAS Survival Guide has been one of the best-selling survival books for years. Now, it has been converted into an app so that you can access its vital information while you are on the go. It provides essential guidance and training on how to survive in extremely difficult situations. Hopefully, you will never have to use it. If you do find yourself in a sticky situation, however, you will be glad that you have it. * Shot Simulator -- Available for Android and iOS As a hunter, it is important to have a thorough understanding of the anatomy of your prey. That way, you can take shots that result in a clean kill every time. The Shot Simulator app allows you to shoot digital arrows at a 3-D model of a whitetail deer so that you can see what organs are hit in each part of the body. The app also provides details such as how long it should take for the deer to die with each type of shot. * Antler Geeks -- Available for Android and iOS If you love reading about hunting, Antler Geeks is a fantastic resource. 
It is packed with articles all about hunting, providing entertaining and informative reading material that can keep you occupied for hours. The best part is, it is made by people who are actually hunters - not just writers. That way, you know that the information in the magazine will be useful and informative. * Realtree Forums -- Available for iOS The Realtree Forums app allows you to connect with other hunters. This can be a good way to get a sense of community since it allows you to talk with other people who are as interested in hunting as you are. * Realtree Archery Challenge -- Available for Android and iOS Have fun practicing your archery skills through your phone with this challenging app. This digital target practice can test your skills. It also makes it easy to share your results through social media.
Planet Intertwingly: Workaround to Launch IBM MQ Explorer after Uninstalling MQ Fixpack on Windows Platform
Summary: This blog describes a workaround for when a user tries to launch MQ Explorer on a back-level MQ fix pack. For example, if the user installed the latest MQ fix pack (such as the MQ 7508 fix pack), then uninstalled the 7508 FP and tried to launch MQ Explorer on the back-level fix pack 7507, they might end up with an error while launching the explorer using the command. The error you could see while launching MQ Explorer is shown below. And in the *.log -
Hacker News: Vertical AI Startups: Solving Industry-Specific Problems
Hacker News: PostgreSQL 10 Beta 1 Released
Hacker News: Panorama (YC S13) in Boston – Sr. Software Engineer – Make a Change in Education
Hacker News: Your.MD raises $10M to grow AI-driven health information service and marketplace
Planet Intertwingly: How do I integrate Logstash with Amazon's Elasticsearch Service (ES)?
Learn the somewhat quirky process for integrating Logstash with the Amazon Elasticsearch Service. Continue reading How do I integrate Logstash with Amazon's Elasticsearch Service (ES)?.
Planet Intertwingly: How do I connect to Kibana from Amazon's Elasticsearch Service (ES)?
Explore techniques that allow specific IP address/proxy server access to Kibana, protect your ES cluster, and block entry by unauthorized users. Continue reading How do I connect to Kibana from Amazon's Elasticsearch Service (ES)?.
Planet Intertwingly: How do I configure access policies within Amazon's Elasticsearch Service (ES)?
Learn to configure the access policies crucial to working successfully with the Amazon Elasticsearch service. Continue reading How do I configure access policies within Amazon's Elasticsearch Service (ES)?.
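The IP-restricted access policies these posts describe are IAM resource policies attached to the ES domain. The sketch below builds one as a Python dict just to show the shape; the account ID, region, domain name, and CIDR range are placeholders, and your actual policy should come from the AWS documentation for your setup.

import json

es_access_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": "es:*",
        "Resource": "arn:aws:es:us-east-1:123456789012:domain/my-domain/*",
        "Condition": {"IpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}},
    }]
}
print(json.dumps(es_access_policy, indent=2))   # paste into the domain's access policy

Restricting by source IP like this is what lets Kibana be reached from a known proxy while keeping the rest of the internet out of the cluster.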
Hacker News: The Hackers Russia-Proofing Germany's Elections
Planet Intertwingly: A new form of watching SMSz & SMU videos - Video Table of Contents
To improve the experience of watching videos, we have created a Video Table of Contents. Generally speaking, a video table of contents gives users another way of watching videos, so we have made one for the SMSz & SMU videos. Please find more details at https://zmanage.github.io/SMSz-SMU-video-TOC/. Compared with the ways we used to collect videos before (listing all the videos on one wiki page, or uploading videos to a YouTube channel), a video table of contents has the following advantages: you can watch the whole series of videos on a single web page; videos can be arranged in different hierarchies; and it provides additional functions, such as searching and filtering videos, bookmarking videos, and so on. The SMSz & SMU video table of contents has also been added to the SMSz wiki page; you can find more details at https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Wfb8610d29f30_4f81_802f_2b8d115202ec/page/Videos. You are welcome to add any comments on this new format; we really appreciate your feedback! (Picture 1: SMSz & SMU Video Table of Contents capture)
Hacker News: Volkswagen partners with Nvidia to expand its use of AI beyond autonomous vehicles
Hacker News: High school dropout who invested in Bitcoin at $12 is now a millionaire at 18
Planet Intertwingly: IBM Control Desk 7.6.0.2 fix pack has been released on 21 Mar 2017
The new IBM Control Desk 7.6.0.2 fix pack was released on 21 Mar 2017. If you recall, it has been one year since the previous ICD 7.6.0.1 fix pack was released on 19 Feb 2016. It runs on Tivoli process automation engine 7.6.0.6 with IFIX004 (Tpae 7.6.0.6-IFIX20170308-0945) and the same IBM Maximo for Service Providers 7.6. In addition, it will upgrade IBM Control Desk for Service Providers to 7.6.0.2. Some of the key highlights include:
1. Enhancement to the Workflow Designer - The Workflow Designer no longer requires an applet and now works on the latest versions of most browsers.
2. Product Help is directed to the IBM Knowledge Center - When you click Help, you are taken directly to the IBM Knowledge Center. Eclipse-based, local Help is no longer supported.
3. Enhancements to the Change Schedule application - The Change Schedule application now has a graphical user interface where the applet has been replaced by HTML-js.
... and many more. In IBM Control Desk 7.6.0.2 you also get the Service Portal installer for the Windows platform. (Previously, the Service Portal installer was only available for the Linux platform in IBM Control Desk 7.6.0.1.)
For more details on what is new in IBM Control Desk 7.6.0.2, see "What is new in IBM Control Desk Fix Pack 7.6.0.2". For complete details on downloading it, see "Downloading IBM Control Desk Fix Pack 7.6.0.2".
As you are aware, IBM Control Desk 7.6 uses the new installer, IBM Installation Manager. If you haven't had a chance to apply this new fix pack to your ICD 7.6 environment, let's go through the fix pack installation process with IBM Installation Manager; the steps are fairly straightforward, and it is good to get a picture of the overall process too. In this blog, I will go through the steps to apply the ICD 7.6.0.2 fix pack to IBM Control Desk for Managed Service Providers running on Windows Server 2012. Just to refresh, we have the following editions:
- IBM Control Desk
- IBM Control Desk for Service Providers
- IBM Control Desk for Managed Service Providers

1. Make sure you have gone through the download documentation in "Downloading IBM Control Desk Fix Pack 7.6.0.2". As you can see, you will need to download part 1 and part 2, which are required. You will also need to download the part 3 Tpae 7.6.0.6 IFIX. Other parts, e.g. Optional Content, Service Portal and IBM Node js, are optional. If you use ITIC, then I would say it is required, but it is upgraded separately. I'll be applying the fix pack to IBM Control Desk for Managed Service Providers 7.6 running on Windows Server 2012 64-bit, therefore I will download the following:
- icd_7.6.0_part1_spm.zip (1.7 GB)
- icd_7.6.0_part2_win64.zip (582.52 MB)
- TPAE_7606_IFIX.20170308-0945.im.zip (31.97 MB)
- 7.6.0.2-TIV-ICD-FP0002_en.README (23.9 KB) - Always keep the readme file of the fix pack; you will need it from time to time.
- icd_optional_content_7.6.0.zip (14.84 MB) - This is optional, but I include it in this upgrade process.
2. Perform the "Pre-Installation Tasks", which you can find in the 7.6.0.2-TIV-ICD-FP0002_en.README file.
3. Then, I create an "ICD7602" folder so I can extract the contents of both zip files (icd_7.6.0_part1_spm.zip & icd_7.6.0_part2_win64.zip) to this same folder. I also create a "TPAE_7606_FIX" folder and extract TPAE_7606_IFIX.20170308-0945.im.zip into that folder.
4. From the Windows start menu, find "IBM Installation Manager" and click to launch the application.
5.
In IBM Installation Manager's file menu, File -> Preferences   7. Click 'Add Repository' and select repository.config files in "ICD7602" folder and in "TPAE_7606_FIX". Then, I select icd_optional_content_7.6.0.zip file itself and add to repository.   8. In IBM Installation Manager windows, click Update.   10. In Update Packages panel, select IBM Tivoli's process automation suite and click Next.   11. In Update Packages panel, select the Version 7.6.0.2 update and click Next. - If you do not get this screen but you get error e.g. "No updates or fixes were found for the packages that are installed in the selected locations", most probably you download/extracted the wrong part1 of the fix pack files. - There are 3 editions where you will only need to download/extract one of it according to your installation; i. IBM Control Desk -or- ii. IBM Control Desk for Service Providers -or- iii. IBM Control Desk for Managed Service Providers. If you are not sure, drop a comment below with screen shot. 12. In Update Packages panel, select the feature IBM Control Desk for Service Providers 7.6.0.2 and click Next.   13. In Update Packages panel, review Summary and click Update.   14. Wait until the packages are updated successfully and click Finish to close the panel. Then, close IBM Installation Manager window.   15. From Windows start menu, find Tivoli's process automation suite's "Configuration Program" and click to launch the application.   16. Click "Update Database and Build and Deploy Application EAR Files".   17. Confirm update summary and click Next.   18. Select applicable deployment operations and click Finish.   19. Once this is completed, you can see the system is upgraded IBM Control Desk 7.6.0.2. - This environment has all languages installed and it took around 8 hours for whole operation to complete.   20. 
Example of full listing of System Information after the system is upgraded to IBM Control Desk 7.6.0.2:

Rational Team Concert and Rational ClearQuest Integration for IBM Control Desk 7.6.0.2075 Build BUILD DB Build V7511-00
Configuration items for IBM Control Desk for Service Providers 7.6.0.2075 Build 201703161751 DB Build V7511-00
Service Desk Classification Content for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7511-00
Quick Configuration for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7511-00
Service Desk for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7602-01
Asset Management for IBM Control Desk for Service Providers 7.6.0.2075 Build 201703161812 DB Build V7511-00
Common process components for IBM Control Desk for Service Providers 7.6.0.2075 Build 201703161751 DB Build V7511-00
IBM TPAE Integration Framework 7.6.0.6 Build 20161014-1504 DB Build V7606-24
Service Desk Integration MEA for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7530-00
Configuration Items CMS LIC for IBM Control Desk 7.6.0.0202 Build 201507210000 DB Build V7511-00
Service Desk Everyplace for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7510-03
Release management for IBM Control Desk for Service Providers 7.6.0.2075 Build 201703161751 DB Build V7511-00
Release management for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7602-01
Service Desk for IBM Control Desk for Service Providers 7.6.0.2075 Build 201703161751 DB Build V7511-00
IBM Maximo Asset Management Work Centers 7.6.0.1 Build 20161014-1326 DB Build V7601-41
Service Request Management for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7601-01
Data Integration and Context Menu Service Configuration 7.5.0.0 Build 20090911D2 DB Build V7117-07
User interface widgets for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7602-01
Configuration management content for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7511-00
TPM Integration Module for IBM Control Desk 7.6.0.0202 Build 201507210000 DB Build V7510-01
IBM Tivoli Integration Composer for IBM Control Desk 7.6.0.2075 Build 201703161812 DB Build V7560-01
Tivoli Remote Diagnostics for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7511-00
Survey Management for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7511-01
IBM Maximo for Service Providers 7.6.0.0 Build 20141125-1930 DB Build V7600-05
Screen Capturer for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7530-00
Entry Edition Content Best Practices for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7540-01
Configuration items content for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7511-00
Asset Management for IBM Control Desk 7.6.0.2075 Build 201703161812 DB Build V7601-03
Instant Messaging Integration for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7200-02
Release management content for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7511-00
Incident Management for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7601-01
Service Catalog for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7602-04
Self Service Center for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7520-01
SLA Hold for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7530-01
Tivoli's process automation engine 7.6.0.6-IFIX20170308-0945 Build 20161019-0100 DB Build V7606-50 HFDB Build HF7606-03
Asset Management Best Practices for IBM Control Desk 7.6.0.2075 Build 7.6.0.2075 DB Build V7600-01
Service Portal for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7601-21
IBM Endpoint Manager Integration Enablement 7.6.0.2075 Build 201703161751 DB Build V7602-02
Configuration management for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7530-02
Service Catalog Content for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7530-01
SmartCloud Provisioning and SmartCloud Orchestration Integration 7.6.0.2075 Build 201703161751 DB Build V7511-04
Solution for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7602-01
Advanced Edition Content Best Practices for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7511-00
Change management for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7602-03
Common process components for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7602-01
Service Desk Best Practices Content for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7520-01
IBM Endpoint Manager Integration Configuration 7.6.0.2075 Build 201703161751 DB Build V7602-01
Search for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7530-01
Configuration items for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7602-01
Instant Messaging for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7530-00
Advanced Workflow Components for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7600-01
Common process components content for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7511-00
OSLC Support for IBM Control Desk 7.6.0.2075 Build 201703161812M DB Build V750-06
Computer Telephony Interface for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7503-01
Configuration management for IBM Control Desk for Service Providers 7.6.0.2075 Build 201703161751 DB Build V7511-00
Service Desk Best Practice Users for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7511-00
Problem Management for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7601-01
Live Chat for IBM Control Desk 7.6.0.2075 Build 201703161751 DB Build V7602-01
Change management for IBM Control Desk for Service Providers 7.6.0.2075 Build 201703161751 DB Build V7511-00
Hacker NewsThis Windows Defender bug was so gaping its PoC exploit had to be encrypted
Comments
Hacker NewsSeattle's minimum wage may actually cost restaurant workers
Comments
Hacker NewsVolvo and Autoliv aim to sell self-driving cars with Nvidia AI tech by 2021
Comments
Hacker NewsJavaScript Fatigue or My History with Web Development
Comments
Hacker NewscURL author Daniel Stenberg denied boarding on flight to US
Comments
roguelike developmentRoguelikeDev Does The Complete Python Tutorial - Week 2 - Part 1: Graphics and Part 2: The Object and the Map

This week we will cover parts 1 and 2 of the Complete Roguelike Tutorial.

Part 1: Graphics

Start your game right away by setting up the screen, printing the stereotypical @ character and moving it around with the arrow keys.

and

Part 2: The object and the map

This introduces two new concepts: the generic object system that will be the basis for the whole game, and a general map object that you'll use to hold your dungeon.
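
For readers who want a feel for where parts 1 and 2 end up, here is a minimal sketch, assuming the classic python+libtcod ("libtcodpy") bindings the tutorial series is written against and a font file such as arial10x10.png next to the script; the map/tile handling from part 2 is left out for brevity, and the calls will differ if you follow along with BearLibTerminal or another backend.

import libtcodpy as libtcod

SCREEN_WIDTH, SCREEN_HEIGHT = 80, 50

class Object:
    # Part 2's generic object: the player, a monster, an item, the stairs...
    def __init__(self, x, y, char, color):
        self.x, self.y, self.char, self.color = x, y, char, color

    def move(self, dx, dy):
        self.x += dx
        self.y += dy

    def draw(self, con):
        libtcod.console_set_default_foreground(con, self.color)
        libtcod.console_put_char(con, self.x, self.y, self.char, libtcod.BKGND_NONE)

    def clear(self, con):
        libtcod.console_put_char(con, self.x, self.y, ' ', libtcod.BKGND_NONE)

libtcod.console_set_custom_font('arial10x10.png',
                                libtcod.FONT_TYPE_GREYSCALE | libtcod.FONT_LAYOUT_TCOD)
libtcod.console_init_root(SCREEN_WIDTH, SCREEN_HEIGHT, 'roguelikedev tutorial', False)

player = Object(SCREEN_WIDTH // 2, SCREEN_HEIGHT // 2, '@', libtcod.white)

while not libtcod.console_is_window_closed():
    player.draw(0)          # draw to the root console
    libtcod.console_flush()
    player.clear(0)         # erase before the next frame

    key = libtcod.console_wait_for_keypress(True)
    if key.vk == libtcod.KEY_ESCAPE:
        break
    elif key.vk == libtcod.KEY_UP:
        player.move(0, -1)
    elif key.vk == libtcod.KEY_DOWN:
        player.move(0, 1)
    elif key.vk == libtcod.KEY_LEFT:
        player.move(-1, 0)
    elif key.vk == libtcod.KEY_RIGHT:
        player.move(1, 0)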

Bonus

If you have extra time or want a challenge, this week's bonus section is Using Graphical Tiles.


FAQ Friday posts that relate to this week's material:

#3: The Game Loop (revisited)

#4: World Architecture (revisited)

Feel free to work out any problems, brainstorm ideas, share progress, and, as usual, enjoy tangential chatting.

If you're looking for last week's post, the entire series is archived on the wiki. :)

submitted by /u/aaron_ds
[link] [comments]
Hacker NewsA New Coat of Paint Is Rocking the Global Shipping Industry
Comments
Hacker NewsPersistIQ (YC S14) Is Hiring a Sr. Back End Engineer in San Mateo
Comments
Les CrisesThe elimination of Abou Bakr Al Baghdadi marks the complete eradication of the Tall Affar circle, the Turkmen nucleus that founded Daesh (by René Naba)

Source: Madaniya, René Naba, 22-06-2017

The elimination of Abou Bakr Al Baghdadi by a Russian raid on Raqqa, northern Syria, on 25 May 2017, if it were confirmed, would mark the complete eradication of the Tall 'Affar circle, the Turkmen nucleus that founded the Islamic State.

As the linchpin of Daesh and the last survivor of the Tall 'Affar circle and of Camp Bucca, in southern Iraq, the disappearance of Caliph Ibrahim would certainly carry heavy symbolic weight. But this disaster would not call into question the project of restoring the Islamic Caliphate, despite the major defeats suffered by the jihadist organization and its heavy loss of life, and despite the suicidal policy it pursued against the minorities under its control, the Christians and the Yazidis, whom it alienated through persecution rather than winning over.

The probable fall of Mosul will, in all likelihood, revive inter-communal tensions, exacerbated by the hyper-fragmentation of Iraqi society after fifteen years of internecine wars. It could push Daesh to compensate for the territorial loss of its caliphate on Arab soil with heavier targeting of Europe. A "European branch" of Daesh is even said to have been created for this purpose, made up of nearly 5,000 volunteers previously engaged in the fighting in Syria and Iraq.

Read more

Hacker NewsFunctional programming in JavaScript is an antipattern
Comments
Les CrisesSyria: the Quai d'Orsay begins to acknowledge its errors of analysis, by Georges Malbrunot

An interesting figure – and therefore probably underestimated…

Source: Le Figaro blog, Georges Malbrunot, 11-01-2017

"We were wrong to personalize the debate in Syria around Bachar el-Assad," a senior official of the Ministry of Foreign Affairs admitted publicly. "We have to recognize that Assad still has popular support, perhaps around 30% of Syrians, the minorities in particular," the diplomat recently added.

These remarks contrast with the talking points abundantly relayed in high places in Paris about the Syrian conflict over the past six years, whether the "imminent departure of the dictator" or "the revolt of a people against Assad" – so many predictions that did not come true, as all the Syrian opposition figures lament.

In private, more and more diplomats point out that the diagnosis of the Syrian crisis made at its very outset was far from shared by all the officials with knowledge of the Syrian dossier. But "we had no say in the matter," one of them recalls. No surprise, then, that today the "realists" are starting to come out of the woods. No one questions the objective of finding an alternative to Bachar el-Assad. They were only criticizing the lack of a "plan B" in our diplomacy, which had bet everything on the overthrow of Assad.

Read more

QC RSSStrong Opinions





Hacker NewsHow HTTPS Handshake Happens
Comments
Hacker NewsSony open-sources NNabla – a simple, fast and lightweight NN library
Comments
Ars TechnicaReport: Valve’s former augmented reality system is no more

CastAR's first prototype. Subsequent revisions brought the glasses' size down and fidelity up, so that its mounted projectors would better convey the feeling that virtual objects appeared on a mat (also known as "augmented reality" or "mixed reality"). However, the project's future is now in doubt. (credit: CastAR)

The future of CastAR, an ambitious augmented reality system that began life in Valve's hardware labs five years ago, is now in serious doubt. A bleak Monday Tweet from a former CastAR staffer was followed by Polygon's Brian Crecente reporting a full company shutdown.

Citing unnamed "former employees," Polygon reported that the hardware maker's primary finance group pulled all funding last week. This was allegedly followed by a full staff layoff and an announcement that the company's remaining assets would be liquidated.

As of press time, neither CastAR nor its affiliated developer, Eat Sleep Play, has posted any confirmation of a shutdown or liquidation. Ars Technica has reached out to CastAR co-founders Jeri Ellsworth and Rick Johnson. We will update this report with any response.

Read 3 remaining paragraphs | Comments

Hacker NewsSpaceVim is the best IDE for people who live in the terminal
Comments
Hacker NewsWhy I utterly loathe the new app switcher in iOS 11
Comments
Ars Technica“McMansion Hell” used Zillow photos to mock bad design—Zillow may sue

(credit: McMansionHell)

An architecture blogger has temporarily disabled her website, McMansionHell.com, after receiving a demand letter from Zillow and posting it on Twitter.

On Monday, Zillow threatened to sue Kate Wagner, saying that she was violating its terms of use, copyright law, and possibly the Computer Fraud and Abuse Act because she took images from the company's website without permission. However, on each of her posts, she acknowledged that the images came from Zillow and were posted under the fair use doctrine, as she was providing (often humorous) commentary on various architectural styles. Her website was featured on the design podcast 99% Invisible in October 2016.

Confusingly, Zillow does not even own the images in question. Instead, Zillow licenses them from the rights holders. As such, it remains unclear why the company would have standing to bring a lawsuit against Wagner.

Read 7 remaining paragraphs | Comments

Hacker NewsHush (YC S16) Is Hiring a Senior Back End Developer
Comments
Ars TechnicaThis Windows Defender bug was so gaping its PoC exploit had to be encrypted

(credit: Microsoft)

Microsoft recently patched a critical vulnerability in its ubiquitous built-in antivirus engine. The vulnerability could have allowed attackers to execute malicious code by luring users to a booby-trapped website or attaching a booby-trapped file to an e-mail or instant message.

A targeted user who had real-time protection turned on wasn't required to click on the booby-trapped file or take any other action other than visit the malicious website or receive the malicious e-mail or instant message. Even when real-time protection was off, malicious files would be executed shortly after a scheduled scan started. The ease was the result of the vulnerable x86 emulator not being protected by a security sandbox and being remotely accessible to attackers by design. That's according to Tavis Ormandy, the Google Project Zero researcher who discovered the vulnerability and explained it in a report published Friday.

Ormandy said he identified the flaw almost immediately after developing a fuzzer for the Windows Defender component. Fuzzing is a software testing technique that locates bugs by subjecting an application to corrupted data and other types of malformed or otherwise unexpected input.
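
Project Zero's actual fuzzer targets Defender's native x86 emulator and is not public; purely as a toy illustration of the mutation-fuzzing idea, here is a minimal Python sketch, where parse is a placeholder for whatever target you want to hammer (a hypothetical name, not anything from Defender):

import random

def mutate(data: bytes, flips: int = 8) -> bytes:
    # Randomly corrupt a few bytes of a known-good sample input.
    buf = bytearray(data)
    for _ in range(flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(parse, seed: bytes, iterations: int = 10000):
    # Feed corrupted variants of the seed to the target and record
    # every input that makes it blow up. A real harness watches for
    # native crashes and hangs, not just Python exceptions.
    crashes = []
    for _ in range(iterations):
        sample = mutate(seed)
        try:
            parse(sample)
        except Exception as exc:
            crashes.append((sample, exc))
    return crashes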

Read 6 remaining paragraphs | Comments

Planet IntertwinglyAppScale: Portable App Engine and Google Tries Again
We just signed a new client called AppScale. Founded by Woody Rollins, who was also a co-founder at Eucalyptus, the company has a parallel playbook. Where Eucalyptus tried to establish itself as a play for building Amazon Web Services (AWS) compatible private clouds, AppScale is an open source implementation of Google's App Engine PaaS. It is based on App Engine APIs and supports Python, Go, PHP and Java applications. AppScale, launched back in 2011, has some high scale customers, and does well where customers are convinced of the App Engine model but need, say, FedRAMP compliance, need to run in mainland China, or want to take advantage of significant discounting by Microsoft Azure.

The core AppScale idea is interesting. However, while from an engineering perspective Google App Engine (GAE) was generally quite well regarded, Google had trouble building a business around it. Pricing changes and customer uncertainty were both issues. Both Google and Microsoft initially bet PaaS and platform services would win, and infrastructure would be quickly superseded. Unfortunately for both companies they were wrong, at least in a multi-year time frame. Enterprises were wary of proprietary PaaS technology, and AWS casually crushed it with infrastructure services before gradually adding platform-specific services. Lock-in was a more gradual process.

So what about App Engine futures? Google has resources to burn, a deep commitment to the cloud business, and reportedly recently hired an executive I greatly respect – Oren Teich, ex Heroku and Canvas – to run GAE. Meanwhile PaaS is clearly still a thing, even while Docker and Kubernetes are taking up so much of the air supply. Red Hat did 3 fundamental rewrites of OpenShift in 5 years before settling on Kubernetes as a base and hitting a market sweet spot. Pivotal continues to make progress with Cloud Foundry. IBM is retooling Bluemix around Kubernetes. These markets are by no means "done" yet. In tech, like fashion, timing is everything. Google looks to be recommitting to App Engine.

AppScale, Cloud Foundry Foundation, IBM, Red Hat, Google, Pivotal, and Microsoft are all clients.
Les CrisesThe United States suffers the biggest leak of voter data in history, by Alexis Orsini

Source: Numerama, Alexis Orsini, 20-06-2017

More than 25 terabytes of sensitive data (address, political orientation…) on 198 million American citizens were accessible on an unsecured online server. The cause: a data analytics firm hired by the Republican Party during the presidential campaign to better target Donald Trump's potential voters.

Age, sex, address, phone number, skin color, political affiliation and personal positions on sensitive subjects such as abortion or guns… All of this information on 198 million potential American voters was stored on an Amazon cloud, without any protection, free to be downloaded by anyone who knew — or discovered — the address. That is more than a terabyte of data accessible without a password.

The data was put online by the data analytics company Deep Root Analytics, hired for 983,000 dollars by the Republican Party during the 2016 presidential campaign to identify citizens potentially inclined to vote for its candidate, in order to better reach them with targeted advertising. After security researcher Chris Vickery revealed the existence of these gigantic, openly accessible databases on 12 June, Deep Root acknowledged its error and secured the files: "We take full responsibility for this situation."

A situation unprecedented in its scale, since this is the largest leak of potential voter data in the history of the United States — and of the world — as Chris Vickery confirms: "In terms of disk space used, it is the biggest exposure I have dealt with. The same goes for its scope." It concerns, in fact, more than half of the American population.

Read more

Hacker NewsSpoilerwall: Avoid being scanned by spoiling movies on all your ports
Comments
roguelike developmentApocatastasis - a seven day rogue-like

I have been working for the last 5 days on a rogue-like that is intended to be finished on Wednesday. The entire game — engine and all — has been written from the ground up in those 5 days. The only external code used is keyboard management, which is ported from my WIP 2D game engine.

/u/dillyo09 and I are now looking for help balancing the combat, so we are looking for people to help with playtesting.

To get the game you have two choices: you can download and play the build here, or you can download the source from GitHub and compile the project yourself.

Now that you have the game, how do you give us feedback? The best way is through the Discord chat here.

Controls.

WASD - movement.

E - use Tome.

12345 - Equip item in slot X

Ctrl + 12345 - Sell item in slot X

B - Buy a random item for 100g

Space - Activate nearby stairs or loot

submitted by /u/nullandkale
[link] [comments]
Planet IntertwinglyMore for June 2017
Quick and easy event creation

As you analyze Sessions, you replay a session to see where users are having some difficulty. You notice that a few users are getting stuck on the same item. Now, you can easily create an event or event asset from the Raw Data.

1. Go to the Raw data view of the Consolidated UI.
2. Select the text from which you want to create an event.
3. Right-click and select either Create SA (session attribute) or Create HA (hit attribute).

The Event manager opens, and the data you selected pre-populates the form. Now you can use the event in a report.

Need to quickly delete a metric in a report?

Sometimes when you are reviewing reports, you find that you have added a single metric that you no longer need. Previously, if you wanted to delete a single metric, you had to use Bulk Delete, find the metric, select it, and delete it. We just made that easier! To delete a single metric, simply hover over the metric in the side panel and click the delete icon (X).

Performance improvements

You may notice some performance improvements with this release. We worked hard to make this product work faster for you!
Planet IntertwinglyWhat's new in June 2017
Cognitive Struggle Analytics: Sorting out the false positives

As you review the struggle analytics data for one of your applications, you realize that some pages have high amounts of struggle. This may be the result of how the user interacts with the page, rather than an indication of real struggle. You decide to address this by eliminating struggle factor detection for those pages. Here's how to do that:

1. From the Struggle Analytics menu, select Edit Struggle Factor.
2. Click the Edit icon of the struggle factor you want to reconfigure.
3. Locate the pages that you want to exclude from struggle detection and deselect the check box for those pages.
4. Click Save. The Step count struggle factor will not be calculated for the pages you deselected.

Quick and easy event creation in the new Consolidated UI

As you analyze Sessions, you replay a session to see where users are having some difficulty. You notice that a few new users are getting stuck on the same item. Now you can easily create an event or event asset from the Raw Data.

1. Go to the Raw data view of the Consolidated UI.
2. Select the text from which you want to create an event.
3. Right-click and select either Create SA (session attribute) or Create HA (hit attribute).

The Event manager opens, and the data you selected pre-populates the form. Now you can use the event in a report. If you are not using the Consolidated UI, you need to have the feature enabled. Go here and read how to make that happen.

Video-like replay

Watch your users' experience more smoothly with our new video replay. You can see how your users interact with your websites in a more seamless manner. Watch your sessions come to life when you click the play icon at the bottom of your session replay display. This new feature will keep improving each release, so feel free to send your feedback!
Planet IntertwinglyWhat's new for organization administrators in June 2017
If you are a system administrator or are assigned the role of OrgAdmin, you can now do the following:

- Modify reports, workspaces, and events that were created by users assigned to the RegularUser role
- Delete reports from users' workspaces
- Schedule workspace reports for users
- Share and unshare users' workspaces
- Delete or modify any CX Overstat snapshot or snapshot group in the gallery
Hacker NewsQuantum Computing: A Beginner’s Notes and Overview of IBM's Quantum Experience
Comments
Hacker NewsWhen Simple Wins: Power of 2 Load Balancing
Comments
Dedefensadedefensa.org needs your solidarity

dedefensa.org needs your solidarity

As we write this message, on 27 June 2017, the dedefensa.org donation bar for the month of June 2017 stands at €1,343. We wish to thank very sincerely and warmly those of our readers who, responding to our first appeals, have contributed to this donation.

Of course, this sum of €1,343 covering the first 26 days of June is still far from the amount we need to continue operating normally. (The same perennial phrases, nevertheless refreshed according to necessity... "'… the amounts of €2,000 and €3,000 [...] represent for us the sums allowing, respectively, a minimum operation of the site's essential functions and an easier operation of those functions.' Our readers obviously know that, since 2011, economic conditions have changed and that the suggested amounts must be defined differently. The threshold for the 'minimum operation of the site's essential functions' now far exceeds €2,000 and sits almost at the level of €3,000, with the rest to match...")

You have been able to read, in many messages, including those referenced in the text introducing the donation bar, the many arguments we present to justify our appeal for your support and solidarity – arguments that concern our site as much as the anti-System press in general, with the essential fight we are all waging. In November 2016, Justin Raimondo found a very apt formula to present the regular funding appeal of his site Antiwar.com, which we could take for ourselves – same fight, same necessities: "The future is bright – and also, potentially, very dark. That is the paradox we live in at the present time. So we must prepare for both eventualities – with your help…"

Here, finally and in conclusion, we simply renew our request to our readers to step in and to ensure that this month of June 2017 follows the dynamic and the logic of previous months and years, in which monthly support for our site has always met our expectations, and always with that same surge of mobilization in the last days of the month.

 

Posted online on 27 June 2017 at 09:27

Hacker NewsDelaney Introduces Bill to End Gerrymandering, Reform Elections
Comments
Planet IntertwinglyWhat's new in IBM Knowledge Center for z/TPF PUT 14
The z/TPF space in IBM Knowledge Center is refreshed with new content for PUT 14. Documentation that is available for PUT 14 includes the following enhancements. Click each link to go to an introductory page for that enhancement.

Access enhancement for format-2 globals
Automatic ECB owner name restore
CDC support for services statistics
Common deployment restart enhancement
DASD prefix CCW support
Data collection and reduction support for services statistics
ECB heap trace display
File system enhancements for PUT 14
Greater than 64 KB read support
REST provider support
SHA-256 support for z/TPF digital certificates
User exits for MongoDB requests and other enhancements
ZDTCP command enhancement
ZIFIL command enhancement
z/TPFDF default key support for z/TPF support for MongoDB
z/TPF support for Java
z/TPF WebSphere MQ trace enhancement
What's new in PUT 14 for z/TPFDF
z/TPFDF optimized B+ Tree add operations

NOTE: New content might not be fully indexed yet in IBM Knowledge Center. Click on the name of each enhancement to link directly to a topic for the enhancement.
Ars TechnicaPotential jurors call Shkreli evil, snake—one blamed him for EpiPen price

Enlarge / BROOKLYN, NY - Monday, June 26, 2017: Martin Shkreli arrives at Brooklyn Federal Court on the first day of his securities fraud trial. (credit: Getty | Kevin Hagen)

Martin Shkreli appeared in a New York federal court Monday for the start of his securities fraud trial—and was quickly declared guilty of price gouging by potential jurors.

Shkreli is facing eight counts of securities and wire fraud in connection to an alleged Ponzi-like scheme involving one of his old pharmaceutical companies, Retrophin. But the ex-CEO is infamous for something completely different: raising the price of a life-saving medication given to infants and people with HIV/AIDS by more than 5,000 percent overnight as CEO and founder of Turing Pharmaceuticals. Outrage over that unrelated move spilled into the courtroom today and stands to slow progress of the fraud trial.

In interviews with Judge Kiyo Matsumoto, potential jurors called Shkreli “evil” and “the face of corporate greed in America,” CNBC reports. One potential juror said: “he’s a snake.” Another admitted: “I have total disdain for the man.” One potential juror blamed Shkreli for the skyrocketing price of EpiPens, which are made by Mylan, a pharmaceutical company that has no connection with Shkreli.

Read 4 remaining paragraphs | Comments

Ars TechnicaIt’s time to teach people online self-defense

Enlarge / Would you get your Internet from this van? (credit: Bobotech - Know Your Meme)

We've learned something from the investigation into whether Russia meddled in the US election that has nothing to do with politics. Humans are more vulnerable than ever to propaganda, and we have no clue what to do about it.

Social media as weapon

A new report in The Washington Post reveals that the Obama administration and intelligence community knew about Russian attempts to disrupt the 2016 election months in advance. But they did virtually nothing, mostly because they didn't anticipate attacks from weaponized memes and propaganda bots.

Former deputy national security adviser Ben Rhodes told the Post that the members of the intelligence community focused on more traditional digital threats like network penetration. They wanted to prevent e-mail leaks, and they also worried about Russian operatives messing with voting machines. "In many ways... we dealt with this as a cyberthreat and focused on protecting our infrastructure," he said. "Meanwhile, the Russians were playing this much bigger game, which included elements like released hacked materials, political propaganda, and propagating fake news, which they'd pursued in other countries."

Read 15 remaining paragraphs | Comments

Hacker NewsHelp kids learn to code at CodeCombat (YC W14) – hiring software engineers
Comments
jwzColo, again
Or, "Old Man Yells at Cloud Hosting, Part 2".

It seems that I have three options, in This Modern World:

  1. Virtual server.

    Many options; the big ones are Amazon, Digital Ocean, Google. They are probably all about the same. Price is somewhere between $450 and $800/month, maybe?

    Pro:
    • Everyone does it this way.
    • When it is Upgrade Season, spinning up a new instance for rebuild/migration is easy.
    • I will never have to think about disk, RAM or power supplies going bad.
    Con:
    • Expensive.
    • The way I would be using it would be to have a single instance. Nobody does it that way, so it probably doesn't work very well.
    • I need 2TB of file system storage. Nobody does it that way, so it's expensive.
    • Figuring out exactly which of their many options is the configuration that I need is really difficult.
    • Whatever IP address they give me is probably already on every email spam blacklist in the world.

  2. Dedicated server.

    I'm seeing numbers anywhere from $100/month to $500/month. It's all over the map, which does not inspire confidence.

    Pro:
    • It's a real damned computer, with predictable behavior.
    • When disk, RAM or power supplies go bad, someone else fixes it.
    • I never need to physically visit it.
    Con:
    • It is hard to tell whether the companies that offer this service will still be in business two years from now.
    • It's hard to tell whether they are real companies, or "one flaky guy".
    • Spinning up a new instance in Upgrade Season is somewhat more involved, and maybe costs me a couple hundred bucks.
    • Though it can be located anywhere, since all of my customers are in San Francisco, it probably should be on the West Coast. That narrows the already narrow field of options.
    • People keep recommending companies that are not hosted in the country I live in. This strikes me as extremely foolish for several reasons.

  3. Bare rack slot, with my own home-built 1U computer in it.

    Probably something like $100/month, plus the cost of the computer (say, $1000, will last 4 years).

    Pro:
    • It's a real damned computer, with predictable behavior.
    • Cheap.
    Con:
    • Hardware failures are my problem.
    • Spinning up a new instance in Upgrade Season is a huge pain in the ass.
    • The data center has to be local, because I probably need to go physically visit it every year or two.

Figuring this out is such a pain in the butt. I really want to believe that option 1 is the way to go, but I'd need to get the price down (without first needing to completely re-design the way I do absolutely everything, thanks), and it just sounds like it's going to be flaky.

Options 2 and 3 sound flaky in their own ways. Pro: I already understand those ways. Con: one of those ways is why I'm looking to move in the first place.
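
For what it's worth, a quick back-of-the-envelope comparison of the three options over a four-year horizon, in Python, using the post's own ballpark numbers (the exact figures are assumptions, and option 1's real bill depends heavily on storage and bandwidth):

# Rough 4-year totals from the prices quoted above; all figures are
# assumptions taken from the text, not real quotes from any provider.
YEARS = 4
MONTHS = 12 * YEARS

options = {
    "virtual server":   450 * MONTHS,         # low end of the $450-800/month guess
    "dedicated server": 300 * MONTHS,         # midpoint of the $100-500/month range
    "bare rack slot":   100 * MONTHS + 1000,  # $100/month plus a ~$1000 1U box
}

for name, total in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name:>16}: ${total:,} over {YEARS} years (${total / MONTHS:,.0f}/month effective)")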

Hacker NewsFoundry Group's zero tolerance policy on sexual harassment
Comments
Hacker NewsApple’s AR is closer to reality than Google’s
Comments
Ars TechnicaTurkey decides evolution is too “controversial” to teach to high schoolers

Enlarge / A young Charles Darwin, before evolution had caused any public controversy. (credit: National Library of Medicine)

In the US, opponents of evolution have tried to undercut instruction on the topic by suggesting schools should "teach the controversy." The national education authorities in Turkey, however, have decided that teachers should avoid any hint of controversy in the classroom. In service of that goal, the country is pulling evolution out of its high school curriculum entirely. The change will be implemented during the upcoming school year, 2017-2018.

In Turkey, the curriculum for state-run schools is set by the national government. The move against education in biology came as the state education authorities were undertaking a review of the national curriculum. Reports indicate that the review largely resulted in an emphasis on religious themes and Turkish culture and history, at the expense of information on Mustafa Kemal Atatürk and his role in the founding of the modern Turkish state.

But science got caught up in the process somehow. According to the head of the national board of education, Alpaslan Durmus, the problem is that Turkish students aren't given the necessary scientific background to separate the theory from the controversy that it has generated in some communities:

Read 2 remaining paragraphs | Comments

Hacker NewsCold war bomb warmed by chickens
Comments
Hacker NewsThe Mere Presence of One’s Own Smartphone Reduces Available Cognitive Capacity
Comments
Hacker NewsZillow forces McMansion Hell to delete posts
Comments
Hacker NewsShow HN: Interactive map for architecting big data pipelines
Comments
Hacker NewsiOS 11 turns your iPad into a completely different machine
Comments
Hacker NewsPICO-8 Lighting by hand #1: the thin dark line
Comments
Ars TechnicaRegulators suggest $7.5 billion coal gasifier facility give up, burn natural gas

Enlarge / Cranes stand at the construction site for Southern Co.’s Kemper County power plant near Meridian, Mississippi, U.S., on Tuesday, Feb. 25, 2014. Photographer: Gary Tramontina/Bloomberg via Getty Images (credit: Getty Images)

A coal gasification plant in development in Mississippi is more than $4 billion over budget and years past deadline—and now it may have to rethink plans to burn gasified coal in favor of cheaper natural gas after a recommendation from state regulators.

The recommendation was made to prevent potential rate increases as the Kemper County plant continues to face cost overruns. Kemper was supposed to be up and running by 2014, for less than $3 billion. But the plant has now run up a $7.5 billion tab and may need redesigns on a critical part, a process that could take up to two years to complete, according to E&E News. No official decision has been made yet, but the Mississippi Public Service Commission made it clear last week that burning cheaper natural gas instead of gasified coal may be a long-term solution for the facility.

Kemper already burns natural gas at its facility, but Southern Company, which owns Kemper, has poured billions into building “transport integrated gasification” (TRIG) technology. TRIG converts lignite coal into synthesis gas using a two-round process to convert a higher percentage of lignite into gas at a low temperature. Syngas made from lignite coal burns cleaner than burning the pulverized coal itself, and, with the addition of a carbon capture unit, Kemper expects to reduce greenhouse gas and particulate pollution by 65 percent. The syngas production process for lignite coal was developed by Southern with the help of the Department of Energy at the National Carbon Capture Center in Wilsonville, Alabama.

Read 6 remaining paragraphs | Comments

Planet IntertwinglyDFDL parser for JSON (PJ44767)
Even though DFDL support has provided the ability to create JSON from binary data for several years, the process used an internal tree structure known as infonodes. Building this internal tree before building the JSON document added overhead that could be avoided. DFDL support has been updated with a new API (tpf_dfdl_buildDoc) that does not use infonodes when creating the JSON document. This new API makes it not only faster, but even simpler to use.

This update also reduced the transformation overhead in all the areas in which DFDL is used. For business events, a new format of "JSON" has been provided that makes use of the new API to build JSON directly from the event data. For REST provider support, if the response involves JSON, the new DFDL API is used to build the response data in the HTTP response body without any changes needed. For Java support, the tpf_srvcInvoke API will make use of the new DFDL API to build the HTTP request body from the request data without any changes needed. Lastly, for z/TPF support for MongoDB, the DFDL metadata changes included with this update allow for a slightly reduced overhead in transforming the data to BSON for MongoDB requests.
Hacker NewsDeep Learning in Robotics
Comments
Hacker NewsApple Acquires German Eye Tracking Firm SensoMotoric Instruments
Comments
Hacker NewsIdentifying Sources of Cost Disease
Comments
Ars TechnicaSkylake, Kaby Lake chips have a crash bug with hyperthreading enabled

Enlarge / A Kaby Lake desktop CPU, not that you can tell the difference in a press shot. (credit: Intel)

Under certain conditions, systems with Skylake or Kaby Lake processors can crash due to a bug that occurs when hyperthreading is enabled. Intel has fixed the bug in a microcode update, but until and unless you install the update, the recommendation is that hyperthreading be disabled in the system firmware.

All Skylake and Kaby Lake processors appear to be affected, with one exception. While the brand-new Skylake-X chips still contain the flaw, their Kaby Lake X counterparts are listed by Intel as being fixed and unaffected.

Systems with the bad hardware will need the microcode fix. The fix appears to have been published back in May, but, as is common with such fixes, there was little to no fanfare around the release. The nature of the flaw and the fact that it has been addressed only came to light this weekend courtesy of a notification from the Debian Linux distribution. This lack of publicity is in spite of all the bug reports pointing to the issue—albeit weird, hard-to-pin-down bug reports, with code that doesn't crash every single time.
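
If you want a rough idea of what your own Linux box reports, a small sketch like the following reads /proc/cpuinfo and prints the CPU model, the loaded microcode revision, and whether hyperthreading appears to be enabled (siblings greater than cpu cores). This is only an illustration; mapping a microcode revision to the fixed version is still up to your vendor's or distribution's advisory.

# Linux-only sketch: summarize the first logical CPU's /proc/cpuinfo entry.
def cpu_summary(path="/proc/cpuinfo"):
    info = {}
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue
            key, value = (part.strip() for part in line.split(":", 1))
            info.setdefault(key, value)  # keep the first logical CPU's values
    siblings = int(info.get("siblings", 0))
    cores = int(info.get("cpu cores", 0))
    return {
        "model": info.get("model name", "unknown"),
        "microcode": info.get("microcode", "unknown"),
        "hyperthreading": siblings > cores,
    }

if __name__ == "__main__":
    print(cpu_summary())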

Read 6 remaining paragraphs | Comments

Hacker NewsWhat is an Ethereum token?
Comments
Hacker NewsInfarm wants to put a farm in every grocery store
Comments
Hacker NewsMinebase: a free data mining tool for social networks
Comments
Hacker News70% Repetition in Style Sheets: Data on How We Fail at CSS Optimization
Comments
Hacker News95-Degree Days: How Extreme Heat Could Spread Across the World
Comments
Planet IntertwinglyVP of Engineering
Code Lab for Cozmo is based on Scratch Blocks, making it the first toy built for kids with the platform.
Hacker NewsOver 150 of the Best Machine Learning, NLP, and Python Tutorials I’ve Found
Comments
Ars TechnicaJudge grants ex-NFL cheerleader’s request to delete dozens of online articles

Enlarge (credit: Arizona Cardinals)

In 2013, Megan Welter had a really bad night.

Welter, at that time a cheerleader for the Arizona Cardinals, got into a drunken fight with her boyfriend. It ended with her calling 911 and reporting him for domestic violence. Welter's boyfriend was a professional fighter, who "smashed [her] head into the tile" and put her in a "choke hold with his legs," she told the 911 dispatcher.

When the police showed up, they found out that wasn't true. Welter's boyfriend, Ryan McMahon, showed video on his cell phone verifying that it was Welter who had attacked him. She was arrested and charged with assault.

Read 18 remaining paragraphs | Comments

Game WisdomWhy We Need to Talk about Failure in the Game Industry

Why We Need to Talk about Failure in the Game Industry Josh Bycer josh@game-wisdom.com

The Game Industry is one of the most popular markets in the world and has attracted people from all over. The allure of being able to play games professionally or be able to make them is a powerful motivator. However, the industry and the media as a whole are not doing a complete job, and are presenting a very dangerous viewpoint of an industry without failure.


The Success Stories:

The Game Industry is full of success stories from the Indie to the AAA. We’ve seen the likes of Minecraft, Five Nights at Freddy’s, Mass Effect and Call of Duty.  These games are now massive parts of pop culture.

The passion seen in the game industry constantly attracts people from all walks of life to make games for a living. Reading stories on game sites about the success of these games, why wouldn't you want to make games for a living?

One big success can easily carry a studio from one game to the next, or earn enough money to then work on whatever you want. Throw in the accessibility of game engines today, and you have a market just waiting for the next big hit.

Unfortunately, for every success story reported, there are countless examples of studios failing.

What’s Success?

Success is very hard to see from the outside looking in. Some measure success in terms of copies sold, or winning awards. The truth of the matter is that success is based on one important fact: Did this game earn enough money to keep the lights on?

Accolades and critical reception don’t always equal a best seller. The more time and money put into a game, the harder it is to earn a profit.


It’s impossible to predict what games will see success in the Indie space

Designers will say that just getting a game finished is moving forward, but that sets a dangerous precedent.

The Truth of the Market:

It’s time to piss off a lot of people. If you ask a game developer for advice on how to make games for a living, they will usually answer with the following: “Just make video games.” At this point, that statement is so wrong that, in my opinion, it is criminally negligent to say.

The following needs to be turned into a sign, because it’s something every student or would-be designer needs to know — Making a Good Video Game is Not Enough.

There is a very big difference between making video games and living off of your games. It is very rare to find an indie studio like Introversion, Grey Alien Games or Positech Games that can say they are still going after a decade plus in the market.

Very little has to go wrong for your studio to be in trouble, and having a great game doesn’t magically erase the risks.

And we’re not even going to talk about all the ways you can doom your game during the development and planning process. Understanding how to succeed in the game industry is just as important as, if not more important than, being able to program or do art.

We only actually hear about failure in the mainstream when it’s too big to ignore, with titles like No Man’s Sky or Mass Effect: Andromeda; even then, due to NDAs, it’s still hard to get a complete enough picture to learn from.

Learning From Our Mistakes:

Being able to learn from our mistakes is a vital part of life beyond just making games. A very important motto that I hear developers say these days is, “Fail Small.” It’s very easy to be driven by passion to go all in on an experimental game concept, but is it good enough to bet your future on?

I feel we need more post-mortems on games so that we can learn about the dos and don’ts. It’s better to figure out you’re doing something wrong during the prototype stage than three months away from release.

While we can’t prevent failures from happening, we can at least make the greater part of the industry and consumer base educated about the risks of game development.

The post Why We Need to Talk about Failure in the Game Industry appeared first on Game Wisdom.

Planet IntertwinglyThe SSP factory certificate that comes with the product is expiring December 1, 2017 at 10:54 PM EST.
The SSP factory certificate that comes with the product is expiring December 1, 2017 at 10:54 PM EST. This certificate is installed as the default certificate and is used for the secure connections to the CM GUI and between the CM and engine. If customers have not installed their own certificate to replace the factory certificate, then after the expiration date the CM will no longer be able to communicate with the engine to push configurations, and the CM GUI will no longer be accessible securely through a web browser. To determine whether the CM and engine are still using the factory certificate, you can run the shell script configureCmSsl.sh or configureCmSsl.bat located in the Secure Proxy CM install bin directory. Here is an example of running the script:

sspuser@l1suse1:~/SSP3430tst/sspcm1/bin> ./configureCmSsl.sh -s
IBM Sterling Secure Proxy V3.4.3.0
Copyright (c) 2017 IBM
Enter the system passphrase:
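
To see when the certificate actually presented by a CM GUI expires, one option is a small client-side check like the sketch below; it is not part of Secure Proxy, the host and port are placeholders you must replace, and it assumes the third-party Python cryptography package is installed.

import ssl
from cryptography import x509
from cryptography.hazmat.backends import default_backend

def cert_not_after(host, port):
    # Fetch the server certificate without validating it (the factory
    # certificate is self-signed) and return its expiration timestamp.
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode(), default_backend())
    return cert.not_valid_after

if __name__ == "__main__":
    # Placeholder host/port for the CM GUI; substitute your own values.
    print(cert_not_after("cm.example.com", 8443))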
Ars TechnicaMurder charges for doc who prescribed alleged “horrifyingly excessive” opioids

Enlarge / Pills. (credit: Getty | smartstock)

An Oklahoma doctor is facing five counts of second-degree murder charges following the opioid overdose deaths of her patients.

Prosecutors charged osteopathic physician Regan Ganoung Nichols, 57, on Friday in Oklahoma County District Court. Oklahoma Attorney General Mike Hunter told reporters that Nichols prescribed trusting patients a “horrifyingly excessive” amount of opioid medications. “Nichols’ blatant disregard for the lives of her patients is unconscionable,” he said.

In all, Nichols allegedly prescribed more than 1,800 medically unnecessary opioid pills to the patients who died, according to a probable cause affidavit reported by the Associated Press. Three out of the five patients also received allegedly deadly combinations of painkillers, muscle relaxants, and anti-anxiety medications.

Read 6 remaining paragraphs | Comments