Last 5 email alerts sent for docker on Hacker News
It's not very difficult to install Ubuntu in a VM or spin up a cloud-based instance. The bigger problem is that their packages work on Ubuntu 14.04 at the latest, which has been unsupported since April, and I'd guess there will be a lot of incidental bitrot that may make getting it running unpleasant. But none of this matters: it looks like there are several Docker-based tile servers, which are a relative breeze to set up on any operating system.
by davidy123 2019-12-12 14:12:47 | link | parent | submission
Overkill? Maybe. But the amount of time saved by not going through "Just having the server side serve the Vue.js app on the main route" is insane. Simplicity is the metric you want to optimize for, because it really reduces opportunities for breakage. Docker adds a few layers, yes, but it also makes deployment and local launches easier. The only acceptable alternative to Docker is, IMHO, a makefile.
If I open your project and see a Dockerfile, I'm just going to `docker build && docker run`. If I see a makefile, I'm just going to run `make help ; make run || make`. It greatly reduces the cognitive load if I only want to quickly see what the project does.
by Fiahil 2019-12-12 11:04:02 | link | parent | submission
I sympathize with your problems; I also had to juggle multiple versions for a period of time on multiple projects. Docker thankfully eliminated that, and even without it the most you have to do nowadays is point "$JAVA_HOME" at another folder. Java and its ecosystem are better than ever, and the language updates since Java 8 are great to work with. The JVM is great. Spring Boot finally removed most of the multi-day config issues. Everything is mature; every problem has one or more libraries to solve it.
by croo 2019-12-12 09:49:34 | link | parent | submission
Last 5 email alerts sent for Kubernetes on Hacker News
Kubernetes and GNU Hurd can't really be compared; it's like saying a banana and a cart of apples are the same. Kubernetes is a fairly simple container orchestration layer, specific to Linux with its cgroups and namespace support in the kernel. LXD does something similar (though it's not as popular as k8s). I think the microkernel architecture did not take off because the device drivers required for interoperating with different devices and chips were never developed for those specific microkernels. Linux benefited from work on BSD and other flavours of Unix besides GNU; GNU Hurd was altogether new and had little industry support. Indeed, Linux faces device driver issues even today. Apple took the Mach microkernel, used BSD userland utilities, and built powerful drivers for its device ecosystem. The day Hurd or any other alternative has that support, it will work. Maybe Google can do it with Zircon and use it in Fuchsia. We need to build an ecosystem around the GNU microkernel if we want it to be used widely; until then it will remain on the fringes.
by dragonsh 2019-12-12 03:17:44 | link | parent | submission
Paxos | Software Engineers (All Levels) | Full-time | Onsite | NYC
Paxos is a post-Series-B fintech startup focused on digitizing the world's assets and democratizing access to capital. This is an opportunity to be part of a fast-paced, small, flat organization responsible for developing our exciting cutting-edge products from design to production. Technologies: React, TypeScript, Go, Python, Kotlin, Kubernetes, Terraform, AWS. Apply here: https://www.paxos.com/careers/
by Paxos-NYC 2019-12-11 20:23:41 | link | parent | submission
Remote or in-house.
Full Job Description: https://www.linkedin.com/jobs/cap/view/1420376808/?pathWildcard=1420376808&trk=mcm Stack: Python, Linux, Cloud (AWS or Azure), Docker / Kubernetes.
If interested, email me with your resume: firstname.lastname@example.org
by EnterraAI 2019-12-11 19:11:07 | comments
It came preloaded with a bunch of different versions of software packages for various operating systems and architectures. This was for an on-premise deployment of Kubernetes where downloading files from outside the cluster was not an option. We were in a rush, and this was the best idea anyone had at the time.
by PopeDotNinja 2019-12-11 18:04:44 | link | parent | submission
Last 5 email alerts sent for aws on Hacker News
Disclosure: I work for AWS. We have many tools at our disposal: non-disruptive in-service updates move live migration from a "must have to operate a compute cloud service at all" to "helpful in some scenarios, when the workload and/or situation warrants the impact to performance during the precopy / potential post-copy phases." But I would not assume that EC2 does not have that particular tool in the "fully production, and used" toolbox.
by _msw_ 2019-12-12 06:57:36 | link | parent | submission
Sorry if I was unclear. In unlimited mode, if you sustain greater than your baseline percentage, you pay for it (the key point of the sentence you’re quoting is that we take on the risk). One reason for this happens to be because AWS doesn’t do migration (yet?), but instead does an awesome job of doing in-place upgrades (see their talks on Nitro, for example).
by boulos 2019-12-12 06:32:37 | link | parent | submission
From what I understand about the E2 instances, the point is that there are often a lot of idle CPU cores compared to the vCPUs that have been allocated by users. Combine that observation with live migration and larger host machines, and it becomes possible to over-sell CPU cores in a way that users won't notice except in tiny ways, maybe 1 in 100 or 1,000 occasions. That makes great sense. Larger instances mean that you'll be more likely to handle spikes, and live migration means that you can balance people who are actually using the CPU a lot. What's the payoff for me? Paying less! Except that without sustained-use discounts, I'm not paying less. What strikes me as odd is that people getting the sustained-use discount are probably the people you'd want on E2 instances. If someone is running a web server and ends up leaving you with a lot of idle CPU, that's really good for Google. That means that they're leaving a lot of unused space where you can schedule other VMs. At least to me, it seems like the people most likely to have empty space are the sustained-use people. Who is likely to have the least unused space? The people paying by the second/hour. If I spin up a VM to do a video encoding task and then terminate it, I'm not leaving a lot of empty CPU that can be filled by other VMs. That's why I find it so curious that there's no sustained-use discount for the E2 instances. People leaving relatively idle VMs running in a sustained way seem ideal for this kind of scheduling. Lots of companies are going to have workloads that, well, are less than efficient. For example, a task worker that gets around 6 tasks per hour and takes a minute per task is leaving around 90% of the requested CPU idle. I guess the question is: why would anyone running with a sustained-use discount switch from an N1 instance to an E2 instance? The blog article sounds amazing: "we've found lots of CPU you aren't using that we re-use and pass the savings on to you!"
Then it seems less fun: yeah, you know how you're running a web server that's idle a lot? We'll re-use all that idle CPU, but you won't get a discount. The weird thing is that Google is offering a ~30% discount for everything except sustained use. You've noted the on-demand discount. A 1-year committed E2 price is 30% off the 1-year committed N1 price. The 3-year commitment is the same. So it doesn't seem to be the case that Google found a lot of idle CPU in on-demand VMs but not in sustained-use VMs. It could certainly be that my expectation that long-running VMs would use less average CPU than short-running VMs is wrong, but Google isn't pricing it that way for the 1- and 3-year committed pricing. Given that N2 instances lowered the sustained-use discount from 30% to 20% and the E2 instances have no sustained-use discount, it seems like Google is re-thinking whether it wants to offer sustained-use discounts. That's a pity to me. Sustained-use discounts drew me to Google Cloud over AWS. Google Cloud's offering said to me: "we get that a lot of people are using our VMs in a sustained way for long times and we'll automatically apply a discount for you without requiring you to sit in meetings determining how you want to allocate things." Committed-use discounts were great on top of that, but making sure that people didn't end up paying the on-demand price just because they didn't spend their time pre-allocating capacity was such a consumer-friendly move and a key pricing differentiator with AWS. It's also a bit odd that the committed-use discounts are now so much higher than sustained use. I save ~10% by going with a 1-year commitment on an N1 and ~36% with a 3-year commitment (compared to just leaving them on), so there isn't a huge benefit to a 1-year commitment on an N1 instance (not that 10% can't be very beneficial). On the E2s, it's quite big: a ~37% discount for a 1-year commitment and a ~55% discount for 3 years.
Frankly, those seem like AWS reserved-instance numbers and mean I'd really want to pre-allocate if I were going with E2 instances. If the E2 instances got a 30% sustained-use discount like the N1 instances, Google could be undercutting AWS by around 50% for that use case. No pre-planning, no commitment, no meetings where people worry about buying something they won't use. Just half-price. It's basically the same savings you'd get if you did a 3-year commitment at AWS (with zero upfront), but without the pre-planning. Instead, it makes me wonder if Google is really committed to the sustained-use discounts or if I'll have to start doing extra planning.
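That gap can be made concrete with a little arithmetic. This is purely illustrative: the $100/month on-demand price is a hypothetical stand-in, and the discount percentages are the approximate figures from the comment above.

```python
# Illustrative only: hypothetical $100/month on-demand price, with the
# approximate discount percentages quoted in the comment above.
on_demand = 100.0

# N1: 30% sustained-use discount applies automatically to an always-on VM.
n1_sustained = on_demand * (1 - 0.30)      # $70/month, no commitment needed
n1_one_year = on_demand * (1 - 0.37)       # ~$63/month with a 1-year commitment

# Incremental saving of committing for a year vs. just leaving the N1 on:
n1_extra = 1 - n1_one_year / n1_sustained  # ~10%

# E2: no sustained-use discount, so committing matters far more.
e2_sustained = on_demand                   # full price, even if always on
e2_one_year = on_demand * (1 - 0.37)
e2_extra = 1 - e2_one_year / e2_sustained  # ~37%

print(f"N1: committing for a year saves an extra {n1_extra:.0%}")
print(f"E2: committing for a year saves an extra {e2_extra:.0%}")
```

This is the commenter's point in numbers: on an N1, commitment buys only ~10% beyond what sustained use already gives you, while on an E2 the whole 37% hinges on pre-planning.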
by mdasen 2019-12-12 05:18:47 | link | parent | submission
Breakage due to OS upgrades is one thing. But at least make sure that the software doesn't do anything obviously bad. For example, I was dealing with a "security product" (basically something that profiles activity on the OS looking for malware) that we were required to roll out, and come to find out it reliably corrupts the RPM package database on both CentOS 7 and Amazon Linux 2 whenever the OS is rebooted (assuming the software is set to start on boot). Things like this are unacceptable.
by derekp7 2019-12-12 04:00:31 | link | parent | submission
Last 5 email alerts sent for coreos on Hacker News
dnsfilter.com | remote, full-time | https://dnsfilter.breezy.hr/p/9cf550668503-devops-engineer
Are you looking for a rocket to take a ride on as a DevOps Engineer? If this sounds like you, you might be interested in the opportunity to join DNSFilter! DNSFilter (a TechStars 2018 company) is a fast-growing, cash-flow-positive SaaS startup with over 1,700 customers. We are a proven product in a proven market. Typical responsibilities will include: - Work closely with our CTO
- Perform OS/kernel upgrades on Ubuntu 16.04/18.04 and CoreOS virtual and dedicated instances.
- Maintain Production, Development, Staging, QA environments, including some Windows instances for debugging.
- Document DevOps processes and state - in infrastructure as code, with commits to github where possible.
- Assist QA and developers
- Increase the resiliency of services by developing master/slave and load balanced solutions.
- Further enhance monitoring of servers, services, and service performance. What we're looking for:
- 3+ Years of DevOps or Linux server administration
- 1+ Years of Experience with Docker containers
- 3+ Years of Experience with Linux
Get your rocket start at DNSFilter!
by DNSFilter 2019-12-02 19:19:50 | link | parent | submission
Technically, there actually weren't that many. More folks were Rackers (ex-Rackspace employees). Edit: I was an early CoreOS employee who was the first "boomerang"
by thebeardisred 2019-12-02 17:45:05 | link | parent | submission
Regardless of the acquiring company's motivation, being acquired is one of the textbook criteria for a successful startup. The other is becoming a large company on its own, but this is rare. And since Red Hat kept Container Linux/CoreOS open source, this wasn't really a move to eliminate competition. They can support enterprise clients, but the open-source nature doesn't stop another company from offering its own enterprise support.
by parsimo2010 2019-12-02 16:39:20 | link | parent | submission
Last 5 email alerts sent for machine learning on Hacker News
> Outside of scientific computation, machine learning, statistics, etc., there is seldom legitimate use for floating point. I don't disagree with the broad strokes, but I wanted to point out that graphics is another area where it's useful. And lots of mathy stuff with practical applications. Audio could be another.
by asveikau 2019-12-12 04:06:57 | link | parent | submission
There is some material about improved build times on the Apple site. I'd say that's aimed directly at programmers. There is also some material on machine learning, but that could be in the domain of other fields. But yes, the majority of the marketing material is aimed at creatives and some researchers doing simulations.
by kart23 2019-12-12 02:40:21 | link | parent | submission
(Sure, it's great for running Linux, Quake, or Alexa in the browser…) But I'll try not to end up sounding facetious. From a code-quality and ease-of-use POV, Blazor might be better; I haven't done enough to argue that. And even rebranding the same technology after a few years of both Moore's law and lessened expectations by users/developers often makes a big difference from a marketing standpoint. And that does count, as it increases the community, which is where a lot of projects fail.
by mhd 2019-12-12 07:49:57 | link | parent | submission
Last 5 email alerts sent for python on Hacker News
Twine is also the name of a package publishing tool in Python. With a growing number of projects and tools, it is inevitable that we are going to have more and more of these name clashes. I wonder if we can arrive at a naming system for projects that avoids such clashes and improves the brand identity of these projects. https://packaging.python.org/tutorials/packaging-projects/#u...
by yori 2019-12-12 11:55:47 | link | parent | submission
My unsolicited advice: Don't worry too much about specific languages. The most valuable thing you can do is get a strong command of *nix and solve a lot of real-world problems with it. Make it your desktop OS and spend a lot of time with it. Build yourself a router. Make an RPi do something to improve your life. Take some cloud service you use and figure out if you could do it the libre way yourself. Build your own NAS. Bonus points if you learn vim or emacs while you're doing it. Don't worry too much about the meta-narrative about the culture associated with each of the languages. The surveillance state is being built with Python, but a lot of hardware hackers prefer Python too. Ruby is praised for its flexibility, but its most successful project is literally called "Ruby on Rails" because it tells you exactly how to do everything. The way people feel about languages goes in cycles, so it's good to be aware of it, but you can mostly ignore it. Use the best tools for the job. If the job is making computers do things, the best tool is unix :)
by anon9001 2019-12-12 11:18:59 | link | parent | submission
Some proprietary protocols are easier to implement than open ones. The other day I wrote against an API server that just accepts messages in JSON format over an SSL socket. It was maybe 3 or 4 lines of Python (not counting setting up the connection), and it probably wouldn't have been bad in C. For the shell you could use `openssl s_client` and something to generate JSON (awk or echo would probably be enough, since it was all flat dictionaries of strings). > There were small errors in their documentation of how things actually worked. That sucks, but I've run into that with people speaking "http" and not just special protocols.
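The "few lines of Python" version of that pattern might look roughly like this. The host, port, and message shape are hypothetical stand-ins, not the actual API the commenter used:

```python
# Sketch of "JSON messages over an SSL socket" as described above.
# api.example.com, port 8443, and the payload keys are all hypothetical.
import json
import socket
import ssl

def send_json(host: str, port: int, payload: dict) -> dict:
    """Send one JSON message over TLS and read one JSON reply."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            # One JSON object per line, newline-terminated.
            tls.sendall(json.dumps(payload).encode() + b"\n")
            return json.loads(tls.recv(65536))

# Usage (against a real endpoint):
# reply = send_json("api.example.com", 8443, {"cmd": "status", "id": "42"})
```

Once the TLS connection is set up, the protocol itself really is just `json.dumps` on the way out and `json.loads` on the way in.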
by swiley 2019-12-12 11:16:36 | link | parent | submission
"Technique #5: Use a terrible protocol. Debugging is boring. Wouldn't you rather appeal to customers who write bug-free code on the first try? To really show disdain for your customers, use a proprietary protocol so that language support is limited to the client libraries you provide, preferably as binary blobs that are never updated. If you design it carefully, a proprietary protocol can be difficult to understand and impossible to debug, too. Alternatively, you can use SOAP (Simple Object Access Protocol). According to Wikipedia, SOAP "can be bloated and overly verbose, making it bandwidth-hungry and slow. It is also based on XML, making it expensive to parse and manipulate—especially on mobile or embedded clients" ( https://en.wikipedia.org/wiki/SOAPjr ). Sounds like a win-win!" I remember, in the early days of Amazon S3, trying to write Bourne shell scripts using only common UNIX utilities to interact with the servers, instead of scripting languages with libraries like Perl, Python, Ruby, etc. This is how I interact with HTTP servers normally, and I never have any problems keeping things simple and dependency-free. Doing it with S3 felt nigh impossible. There were small errors in their documentation of how things actually worked. It felt like they were intentional, just to trip me up; I know they were not. The official recommendation back in those early days of AWS was to use the options provided by Amazon, each of which required a scripting language with libraries. One was SOAP.
by 3xblah 2019-12-12 11:10:25 | link | parent | submission
Here are a few more:
* Aggressive rate limiting that makes it hard to use the API in real-world situations. Especially if there is no documentation about what the limits are.
* Throwing errors with no explanation of what went wrong or how to correct it. Especially effective if, given two very similar requests, one succeeds and the other fails.
* Throwing errors randomly / when under load / when not under load / based on the phase of the moon.
* Having absolutely no example code anywhere in the documentation.
* Requiring hundreds of lines of code to even establish a connection to the API.
* Requiring a specific client library to access the API. Extra points if it's Windows-only or requires a specific out-of-support version of Python 2.
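For what it's worth, the usual client-side defense against the undocumented rate limits and random failures above is retry with exponential backoff and jitter. A minimal sketch, with `call_api` as a hypothetical stand-in for whatever actually hits the endpoint:

```python
# Retry a flaky API call with exponential backoff plus jitter,
# a common defense against undocumented rate limits.
import random
import time

def with_backoff(call_api, max_tries=5, base=0.5):
    for attempt in range(max_tries):
        try:
            return call_api()
        except Exception:
            if attempt == max_tries - 1:
                raise  # out of retries: surface the error
            # Sleep 0.5s, 1s, 2s, ... plus jitter so clients don't sync up.
            time.sleep(base * 2 ** attempt + random.uniform(0, base))
```

It doesn't fix any of the anti-patterns listed, but it keeps them from taking your own service down with them.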
by rwmj 2019-12-12 10:44:11 | link | parent | submission
Last 5 email alerts sent for ios on Hacker News
> Mac is kind of not there in the developer community, unless you're talking about shops that deal primarily in the iOS space. Not in the West. Go to any conference (Java, Rails, JS, Rust, etc.), and the share of Macs is wildly higher than their average market share (something like 40% vs. 5-10% of the overall market). And when it comes to presenters and "star devs", the Mac share is even higher...
by coldtea 2019-12-12 14:23:34 | link | parent | submission
OsmAnd+ is a much faster smartphone client that cares about your privacy and doesn't spam you with ads. You can find it on osmand.net, in the F-Droid repository, and in the Play/iOS stores.
by tomcooks 2019-12-12 13:57:07 | link | parent | submission
Basically irrelevant on platforms other than the web. No iOS user will think your app is dated because it looks like the apps Apple makes; ditto Windows, Android, and so on. And even on the web, I've seen this repeated as folk wisdom but have never seen anyone back it up with data, nor go a step farther and find out exactly how trendy and how expensive a GUI needs to be for a given audience not to bounce (surely this would not be constant). Meanwhile, lots of successful sites don't follow design trends the way every designer and startup seem to want to, and others do spend a lot of money fucking up their UI, not in a particularly trendy or attractive way, and are successful nonetheless. I suspect "minimum viable design" is much cheaper and more straightforward than a lot of what's being treated as necessary to attract users, and has less to do with following trends or cute animations or whatever than with simply appearing to have been updated some time later than 2005.
by shantly 2019-12-12 13:44:57 | link | parent | submission
StreetComplete is fantastic; alas, there's no iOS version. After a while getting used to it, you may want to configure the "questions" StreetComplete asks you. Better not to answer every type of question than to get discouraged or fed up with the umpteenth "is there street lighting?". Of course, if you can stomach being bombarded with questions every few meters, more power to you!
by Tomte 2019-12-12 13:43:49 | link | parent | submission
Last 5 email alerts sent for ruby on Hacker News
Well, I think a book to motivate you and get you out there making something the quickest is REWORK. It was written by the Ruby on Rails creator and his co-founder at Basecamp. It's pretty different from most books; most ideas are 1-2 pages long, and I found that it got me motivated to build faster than any other book.
by elamje 2019-04-20 17:05:49 | link | parent | submission
> everything compiles to the same assembler code so let's not pretend these languages are doing magical things. They don't, and many of the dynamic features of Python (and Ruby) cannot be efficiently compiled. That's why Python relies heavily on C modules.
by quonn 2019-04-20 16:01:12 | link | parent | submission
>It's also why you don't get multi-line lambdas. Everything is a compromise. Arguably too much of a compromise. This is just Guido doing a "because I say so" and imposing his bias against functional programming. Give me a properly-designed language like Ruby any day over Python's bag of compromises.
by cutler 2019-04-20 15:30:56 | link | parent | submission
Last 5 email alerts sent for bitcoin on Hacker News
David Gerard presents himself differently in his book than how he acts in real life. In the words of Brendan Eich, David is a "real piece of work". For the record, we are talking about the guy who created the Bitcoin article on RationalWiki back in 2011. PROOF: https://rationalwiki.org/w/index.php?title=Bitcoin&oldid=819... He's also an admin on Wikipedia and claims to be a "subject matter expert" on cryptocurrency, which he uses to get around his obvious conflict of interest. Here's what he had to say about Ethereum: >I believe Ethereum's codebase is based on the Bitcoin codebase (though I don't have a cite), so it's a fork of that (as most altcoins are). Hypothetically you could do blockchain software that wasn't, but that's not relevant here. The actual blockchain generated from this is separate - David Gerard (talk) 12:10, 19 March 2016 (UTC) Yeah, a real subject matter expert who can't be bothered to do two minutes of research. Stay away from this clown.
by Acrobatic_Road 2019-12-12 03:45:08 | link | parent | submission
You cannot see that the mcap doubles in 10 years by checking the current mcap; you just explained why the mcap is pointless. Everyone knows USD gets printed; inflation isn't a secret either. No one needs to check the mcap daily over several years to see that. People may want to know the inflation rate in % per year, or how much money was printed in the last year, etc. The current mcap, however, says nothing. The history may show a trend, but that's something completely different, and the actual current value isn't relevant to determine a trend. Also, you're totally bullshitting: neither Interactive Brokers nor Yahoo Finance shows mcap for any fiat or precious metal. Yahoo does show mcap for crypto, but that doesn't make it a useful value; it's most probably just copied from CoinMarketCap. >For any stock you click, you see its market cap. Sure, but I said fiat/gold, not stocks. The whole discussion started because people think the mcap of bitcoin somehow makes bitcoin or other cryptos comparable to stocks or fiat/gold. But that's nonsense, because the value itself has no meaning for these assets. Again, no one cares that you think I have no clue. Bring arguments/sources for your claims or stop wasting everyone's time.
by noxer 2019-12-12 00:23:55 | link | parent | submission
> Distributed = No central system at all I can have a single non-distributed computer system and have that single computer be decentralized by having the programs it runs be done through achieving consensus. Your definitions are a bit off, because Mastodon and Bitcoin are both distributed and decentralized systems.
by miguelmota 2019-12-11 23:49:08 | link | parent | submission
I'll just pick one. Matrix is resource-heavy, even in the best of times. There is a confluence of smaller contributing factors, and several effects on both its usability and developer/platform friendliness. Consider this anecdote of the overhead of sending the message "hi" as a baseline: in Matrix, this takes about 1KiB on average after all is said and done in JSON (and varies, unfavorably). So to over-simplify, that's 1K to the disk, 1K to each server in the chatroom, for which there could be several hundred, and 1K read on query from clients and other servers -- all for a 2 byte payload. Before I'm accused of being unfair because there's always going to be a ridiculous ratio for something like a "hi" message, consider some alternatives for both the format and payloads of the fundamental protocol primitives. Most of the messages contain cryptographic metadata which has a succinct binary representation, but JSON requires base64; when represented naturally with CBOR the overhead can be reduced ~40%, and without encoding/decoding either. Consider that virtually all of the overhead in "hi" is either cryptographic hashes or signatures or integers (depth/ts) which would benefit from compact formats. Does representation actually matter though? Why can't I just store Matrix in my format and federate with your format? I guess this is an example of where theory and reality collide. One cannot write a server which handles messages as abstractly as possible (leveraging these JSON/CBOR extensible formats) while at the same time knowing which fields have a more efficient binary representation and transforming that. In reality that just looks like a CBOR message with a bunch of base64 plaintext strings. It doesn't achieve that 40%. All of this is important because of Matrix's (superior) design over its event abstraction. When the whole protocol is hinged on fundamental primitives (a good thing) attention to detail and focus must be given to those primitives. 
The more optimized they are, the more everything built with them is also optimized. When the foundation is efficient, developers can do more with Matrix and enrich the user experience. For another example, Matrix has been reluctant to give developers the power to store shared-program (i.e. bot) state efficiently in a chat room. This is because there's no mechanism to delete state_keys, or even to discard overwritten state itself. That's an important cornerstone that's missing while the rest of the tower is being piled on. It might be possible to dismiss any of these issues as trivialities, and their solutions as bike-shedding, but I contest that their importance is evident in how they emerge to shape the character of the entire system. Consider: does Matrix require the entire DAG to be acquired, like a bitcoin blockchain? Then I'd better be careful and conservative about messages. Does Matrix allow gaps in the DAG and have a smaller chain for auth? Then I can be liberal about messages and delete stuff later. When I go to build an application that communicates over Matrix, those qualities emerge as fundamental limitations or liberations on what users can experience.
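The base64-vs-binary point above is easy to check with the standard library: a raw hash or signature grows by a third when it has to travel as base64 text inside JSON. The sizes below use Python's padded base64; an unpadded variant would be only marginally smaller:

```python
# Rough illustration of the base64-in-JSON overhead described above:
# binary fields (hashes, signatures) inflate when carried as text,
# whereas a binary format like CBOR could carry the raw bytes directly.
import base64

sha256_hash = b"\x00" * 32  # a 32-byte hash
ed25519_sig = b"\x00" * 64  # a 64-byte signature

for name, raw in [("hash", sha256_hash), ("signature", ed25519_sig)]:
    encoded = base64.b64encode(raw)
    print(f"{name}: {len(raw)} bytes raw -> {len(encoded)} bytes as base64")
```

Padded base64 always emits 4 output bytes per 3 input bytes, so every binary field in an event pays roughly a 33% size tax before any of the JSON punctuation around it is counted.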
by jasonzemos 2019-12-11 23:44:39 | link | parent | submission