Brian Armstrong, Coinbase’s co-founder and CEO, has revealed the majority of traders on Coinbase Pro are now institutional investors. Speaking to Fred Wilson, co-founder of Union Square Ventures and investor…
Digital asset markets jumped in value over the last 24 hours as most cryptocurrencies have gained 8-13%. Since our last markets update the entire cryptoconomy has increased by $33 billion and this Sunday bitcoin cash (BCH) led the top ten pack once again with a 12.9% gain in the last day.
Just two days ago short positions and crypto bears managed to scale back the prices of many coins. During the start of the weekend, however, prices began climbing again and many digital assets recouped some of their losses. On May 19 the trend changed for the better and a large portion of digital currencies started to break out and gather some decent gains. This Sunday, bitcoin core (BTC) is just below the $8K zone with an average price of around $7,908 per coin. BTC has gained 7.8% over the last day and around 10.4% over the last seven days.
The second largest market cap still belongs to ethereum (ETH) and its markets are up by 6.6%. Each ETH is trading for $252 per coin and the market has gained 32% over the last week. Ripple (XRP) is up around 6% as well and each XRP is swapping for $0.39 at press time. XRP markets are up 26% over the last seven days and just started seeing some stronger gains over the last two weeks. Lastly, litecoin (LTC) is up 6% today but only 5.5% for the week with each LTC swapping for $92 per coin.
Bitcoin cash (BCH) is currently trading for $404 per unit and is up 12.9% over the last week. BCH has a market cap of around $7.1 billion and global trade volume today is $2.6 billion. The decentralized cryptocurrency still holds the fifth highest trade volume today above eos and below LTC. Tether (USDT) is the top pair traded with BCH on May 19 as 49.9% of BCH trades are against USDT. This is followed by BTC (24.4%), USD (9.9%), KRW (9.9%), JPY (2.1%), and EUR (1.5%). The top exchanges trading the most BCH volume include Coinbene, P2pb2b, Bitmart, Binance, Huobi, and Hitbtc.
Looking at the 4-hour chart for BCH/USD on Kraken shows bulls gathered strong momentum during the early morning trading sessions but are currently facing big resistance. Most 4-hour oscillators show either neutral or bullish readouts. At the moment the Relative Strength Index (RSI ~58.28) sits below overbought territory at a fairly neutral reading. Stochastic shows a similar readout (~80.41) and MACD levels (~33.20) show there’s room for more price improvements in the short term.
The two Simple Moving Averages (SMA 100 & 200) show there’s still a decent gap between the short term 100, which is above the longer term 200 SMA trendline. This indicates that the path of least resistance is still the upside even after bears nudged the price down two days ago. Order books show BCH bulls will meet strong resistance levels between the current vantage point and the $430 range. There’s more resistance at the $455 zone as well if bulls manage to climb higher. On the downside, bears will see pit-stops between now and the $375 zone. Alongside this, there’s a string of foundational support around the $340 region as well.
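The moving-average comparison described above can be sketched in plain Python. The price series below is hypothetical, not real BCH/USD data; it only illustrates how an SMA 100 sitting above an SMA 200 is read as an upward bias:

```python
def sma(prices, window):
    """Simple moving average over the most recent `window` prices."""
    if len(prices) < window:
        raise ValueError("not enough data for this window")
    return sum(prices[-window:]) / window

# Hypothetical, steadily rising closes (illustration only)
closes = [370 + i * 0.5 for i in range(200)]

short_sma = sma(closes, 100)  # stand-in for the SMA 100
long_sma = sma(closes, 200)   # stand-in for the SMA 200

# Short SMA above long SMA: the trend bias is read as upward
trend = "upside" if short_sma > long_sma else "downside"
```

In a rising series the shorter average weights only recent (higher) prices, so it sits above the longer one, which is the gap the chart commentary refers to.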
It has been an odd week for cryptocurrency markets as there’s been some decent volatility taking place from time to time. BTC/USD, BCH/USD, and ETH/USD short positions were pretty high on Saturday but most have been squeezed since then. As unexpected as the last drop was two days ago, the 10-15% rises on Sunday were also a surprise. And still, top markets have shown room for growth as foundational supports held perfectly during the price pullbacks. People have also been watching for big whale movements and there were two large transactions on May 19 totaling $38 million worth of BTC according to the Twitter account Whale Alert. It’s safe to say the last dump shook traders up with uncertainty and whether or not the bull run will continue is still debatable.
Where do you see the price of bitcoin cash and the rest of the crypto markets heading from here? Let us know what you think about this subject in the comments section below.
Disclaimer: Price articles and markets updates are intended for informational purposes only and should not be considered as trading advice. Neither Bitcoin.com nor the author is responsible for any losses or gains, as the ultimate decision to conduct a trade is made by the reader. Always remember that only those in possession of the private keys are in control of the “money.”
Images via Shutterstock, Trading View, Bitcoin.com Markets, and Coinlib.io.
Want to create your own secure cold storage paper wallet? Check our tools section. You can also enjoy the easiest way to buy Bitcoin online with us. Download your free Bitcoin wallet and head to our Purchase Bitcoin page where you can buy BCH and BTC securely.
The post Markets Update: Bitcoin Cash Jumps Ahead as Crypto Prices See Fresh Gains appeared first on Bitcoin News.
“The important question is not whether it is decentralized or centralized. It’s whether the potential for a monopoly is impossible.”— Noam Levenson
Does Decentralization Reduce the Likelihood of a Monopoly? was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.
By CCN: The sudden swell of Bitcoin’s price to $8,000 USD this year shows global recession fears are mounting. This according to Michael Hartnett, the chief investment strategist at Bank of America Merrill Lynch. Interestingly enough, Hartnett says global investors are investing in bitcoin not because they see it as a safe haven asset in times of instability but because they are seeking high-risk, high-reward investments. He argues the effect of low-interest rates since 2008 on bond yields has left investors starved of profits, sending them into a global “greed trade” across corporate, emerging market, and crypto securities.
The post Bitcoin Price Boom Signals Massive Dystopian Panic Over 2019 Recession: Analyst appeared first on CCN
By CCN: Oscar winner Julia Roberts is one of the people who’s never watched “Game of Thrones” and like many others, she doesn’t plan on watching the highly-anticipated season finale either. Hollywood’s darling managed to make prostitution tolerable on the silver screen when she starred in “Pretty Woman.” She may not be watching, nor will those who are bemoaning the entire season as being messed up. However, millions of others will be fixated on HBO’s most popular show ever. They may not even show up for work! Even Julia Roberts has a hooker is more classy than she is! pic.twitter.com/2bZzVbeosT
The post Game of Thrones Has ‘Too Much Sex’ for This Hollywood A-Lister appeared first on CCN
Privacy in the online space is quite compromised these days and anyone who would like to protect their own may consider using a VPN service. There are many platforms on the market and some are catering to the crypto community. Cyberghost VPN is one of them and it takes bitcoin cash (BCH).
Besides the need to safeguard personal data, which is often exposed in industrialized societies, there’s also the issue of restricted access to information, typical of nations under authoritarian regimes with heavy state-sponsored censorship programs. Other barriers include various geolocation restrictions limiting the availability of certain services in some markets.
VPN (virtual private network) providers help you overcome the challenges on these fronts and their services enjoy growing popularity in the expanding crypto space. At this stage, not all of them accept digital currency payments, which add another layer of security, but there are some notable exceptions. These include Express VPN and Private Internet Access.
Cyberghost is another crypto-friendly platform, credited for its simple-to-use software, which makes it a good choice for beginners. Cyberghost shields your private data and protects your online identity from hacking attacks and other encroachment attempts.
The service helps you stay safe on public wi-fi networks and hides your real IP address when surfing from your home. It uses encryption and maintains a no logs policy. The VPN also facilitates more secure financial transactions including online banking.
Another strength of Cyberghost is that it has almost 3,700 servers in over 60 countries and offers unlimited bandwidth and traffic. Its major advantage for cryptocurrency users is the option to pay with digital coins for its services. An integration with Bitpay allows you to spend your bitcoin cash on one of its subscription plans, which currently start at $2.75 a month.
If you are looking for other products and services to buy with your BCH, check out Bitcoin.com’s Spend Bitcoin Cash page. The online store allows you to shop for apparel and bitcoin branded accessories, purchase gift cards for major brands and retailers or buy a hardware wallet at a discount.
Are you currently using a VPN service and how do you pay for it? Share the details in the comments section below.
Disclaimer: Readers should do their own due diligence before taking any actions related to third party companies or any of their affiliates or services. Bitcoin.com is not responsible, directly or indirectly, for any damage or loss caused or alleged to be caused by or in connection with the use of or reliance on any third party content, goods or services mentioned in this article.
Images courtesy of Shutterstock, Cyberghost.
The post Cyberghost Is a VPN Service You Can Pay For With Bitcoin Cash appeared first on Bitcoin News.
“All media work us over completely. They are so pervasive in their personal, political, economic, aesthetic, psychological, moral, ethical, and social consequences that they leave no part of us untouched, unaffected, unaltered.”
– Marshall McLuhan
Imagine taking out your cell phone and pulling up Facebook or Twitter, and one of the first things you see on your screen is a commercial about a product. What do you do? Do you simply ignore it and scroll down? Do you take a look at it with an open mind (maybe because you see that Cody and Tucker have liked the advertiser)? Or do you click on the icon in the top right corner and choose to hide the ad?
Anyway, now that you have either ignored or dealt with the pop-up ad, you get on with browsing and after seeing only 4 or 5 posts, there it is, another sponsored ad shows up.
Sometimes, the advertisement is actually interesting. Once you accidentally watch the ad for a couple of seconds or click on it, ads like this start showing up on your screen every time you visit the page.
Social media services provide us with plenty of potentially useful and interesting information, catering to our personal interest. But because they are so tailored to our behavior, what we get is an alternate reality built around what these platforms “think” of us.
Much of the internet is like this. With the help of cookies, sites remember our preferences during our visit, thus providing “predictions” for our behavior in the future. While this provides convenience, our perception of things is being shaped by things that are brought to our attention more or less without our consent. We have the option to hide them or ignore them one by one, but we can’t change the basic rules, which ensure that everything we see is tailored to us. This is a significant departure from offline reality that most people fail to appreciate, even as the internet continues to grow in importance in everyone’s life.
Offline, we encounter the world as it exists on its own terms. But on the internet, different platforms are trying to project images and information at us, wrapping us in an alternate universe. This can even extend out of the virtual realm and impact our real-life actions and behaviors, if we aren’t careful and mindful.
When we go into nature, a bookstore or a shopping mall, we are intentionally seeking something new and are open to having our thinking disrupted. But, of course, we clearly don’t want that all the time, for example, when we are checking the news feeds or what our friends have been up to. What is intended to be a private moment isn’t supposed to be contaminated by irrelevancies. After all, how much do you enjoy being bombarded by unwanted information while chilling on your cozy sofa?
“Instead of scurrying into a corner and wailing about what media are doing to us, one should charge straight ahead and kick them in the electrodes.”
– Marshall McLuhan
It seems, however, that there is not much we can do to prevent our online communications and experiences from being trivia-infested. But fortunately, there are still some small things we can do, each at a small cost in convenience. It is up to you to decide whether or not it is worth doing. Aside from that, remaining aware and vigilant about the pollution online ads can inflict on us is definitely a step in the right direction.
When the marginal cost of serving a new user is low, companies tend to grow big. When the additional cost of serving new customers barely matters, it makes sense to split the fixed cost between a bigger number of customers. This is pretty standard microeconomics.
When the service is automated the marginal cost can be really low; computers can do some things really efficiently. The internet also provides zero-cost automated distribution. But in information technology we can also have network effects — where serving a new customer improves the utility for other customers — which is as if the marginal costs were negative.
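The fixed-cost-splitting argument above can be sketched in a few lines of Python; the cost figures here are invented purely for illustration:

```python
# Average cost per customer = fixed cost share + marginal cost.
# As the customer base grows, the fixed share shrinks and the
# average cost approaches the (low) marginal cost.

FIXED_COST = 1_000_000   # e.g. building the software (assumed figure)
MARGINAL_COST = 0.10     # cost of serving one more customer (assumed)

def average_cost(n_customers):
    """Per-customer cost when the fixed cost is split n ways."""
    return FIXED_COST / n_customers + MARGINAL_COST

small_base = average_cost(10_000)      # fixed cost dominates
large_base = average_cost(10_000_000)  # marginal cost dominates
```

With network effects the picture is even stronger: each new customer not only spreads the fixed cost thinner but also raises the value of the service for everyone else.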
This leads to a new kind of business which outsources most of the service and provides by itself only the part that can be automated (usually the customer interface). This is the easiest way to grow big. If the outsourced part is fully standardized you can have a digital marketplace business like Uber or AirBnB. This is a good place to be — you have commoditized your complement. It is even better when the non-automated part of the service is provided by the users themselves — like content in social networks or web search. But then you need an additional part of the business to be the profit centre — usually advertising. Fortunately this also can be automated with self-service for the advertisers. These are called Super Aggregators (or Level 3 Aggregators) by Ben Thompson — everything automated.
Digital marketplaces are usually Level 2 Aggregators — because they cannot automate “bringing suppliers onto their platform”. For example Uber needs to do background checks and vehicle verification for each new driver. But this is not a universal rule; they can also be Super Aggregators.
There are also Level 1 Aggregators — where the non-automated product/service part is not standardized enough for a marketplace, but can still be outsourced. This is the case of Netflix buying content, and then serving it in a fully automated self-service.
There are additional complications.
AirBnB is a better aggregator than Uber — because customers in hospitality market come from all over the world and need something that will be trusted (and recognized) globally — while taxi users are mostly from local population. A local taxi company can easily compete with a global one, but short term accommodation needs a global marketplace.
Drivers (for Uber and Lyft) and apartment/room owners (for AirBnB and Booking.com, Expedia and others) are not bound to any marketplace and can choose between offers. This is a problem for the aggregators because it lets their providers commoditize them. In hospitality market there are Channel Managers which are kind of aggregators of aggregators — they aggregate the marketplaces as marketing channels for hospitality providers. It would be very useful to analyse when that can happen and especially when you can automate it like the Channel Manager software does.
But not every startup is an aggregator — sometimes they can automate such a big part of the main job that they have no need for outsourcing. For example Codility, a company I have some shares of, found a way to automate evaluating programmers and does not outsource any part of its main job. But this might change, as creating the programming tasks cannot be automated and so it might be useful to outsource it.
“Truly, we are now in a new world where the old certainties are melting away and we have to learn to think and act differently. We have to interact with these uncertain processes, which affect our health, our food, our weather, our standard of living.”
— Brian Goodwin, 2001
It is now almost ten years since Brian Goodwin (1931–2009), a founding member of the Santa Fe Institute, died. I owe a huge debt of gratitude to Brian because the MSc in Holistic Science he created and taught me on has deeply influenced my life and work ever since. Brian’s call for a ‘Science of Qualities’ is gaining in critical significance as the converging crises of climate change, resource depletion, run-away tech, genocidal economic dysfunctionality, and mass extinction are pointing towards civilisational collapse.
“A science of emergent qualities involves a break with the positivist tradition that separates facts and values and a re-establishment of a foundation for a naturalistic ethics. Participation now enters as a fundamental ingredient in the human experience of any phenomenon, which arises out of the encounter between two real processes that are distinct but not separable: the human process of becoming and that of the ‘other’, whatever this may be to which the human is attending. In this encounter wherein the phenomenon is generated, feelings and intuitions are not arbitrary, idiosyncratic accompaniments but direct indicators of the nature of the mutual process that occurs in the encounter. By paying attention to these, we gain insight into the emergent reality in which we participate.”— Peter Reason & Brian Goodwin, 1999
Brian emphasised that emergent properties can only be fully understood and identified “by their qualities, which are expressions of the coherence of the whole.” He defined emergent properties in complex adaptive systems as “unexpected types of order [novel coherent patterns] that arise from interactions […]. Something new emerges from the collective — another source of unpredictability in nature.” Brian suggested that “the complex systems on which our lives depend — ecological systems, communities, economic systems, our bodies — all have emergent properties, a primary one being health and well-being” (Goodwin et al., 2001, p.27). This formed the basis of my own doctoral research into a Salutogenic Design approach to the complex ‘wicked problems’ associated with creating sustainability and systemic health (Wahl, 2006a).
So how do we aim for appropriate participation by designing for positive emergence? How do we design for human and planetary health? To do so in ways that are elegantly adapted to the bio-cultural uniqueness of place will require us to pay attention to the qualitative aspects of interactions and relationships.
We need to nurture those dynamics which affect whether the nested holarchy (Koestler, 1969) of interdependent complex systems — individuals, communities, ecosystems, bioregions, the biosphere — increases in health, resilience and adaptive capacity and disincentivize or design-out those that are degenerative and drive diversity loss — increasing fragility and decreasing viability.
Maybe a science of qualities is best put into action as a global-local (glocal) practice of regenerating Socio-Ecological-Systems (Young, et al., 2006) and Planetary Health (Whitmee, et al., 2015)? Maybe paying attention to our experience of the qualitative relationships that link us to each other and to the planetary life support systems would inform such a qualitative science of planetary healing?
Maybe once we understand that health is a scale-linking emergent property that — given the right conditions — emerges from the relationships, interactions and information flow in the nested complexity we participate in, we will better learn to sense, feel and intuit our way into how to facilitate the emergence of positive systems properties — like health and wellbeing — in humble acceptance that we cannot predict and control these systems?
Nora Bateson’s work on ‘warm data’ offers an important contribution to the emerging science of qualities. She warns that “Utilizing information obtained through a subject’s removal from context and frozen in time can create error when working with complex (living) systems. Warm data presents another order of exploration in the process of discerning vital contextual interrelationships […]”, and defines warm data as “transcontextual information about the interrelationships that integrate a complex system.” Paying attention to this ‘warm’ qualitative data offers “another order of exploration in the process of discerning vital contextual interrelationships” (Bateson, 2017). Nora Bateson suggests:
“To address our socio-economic and ecological crisis now requires a level of contextual comprehension, wiggly though it may be to grok the inconsistencies and paradoxes of interrelational process. Far from solving these dilemmas or resolving the conflicting patterns, warm data utilizes these characteristics as its most important resources of inquiry.”
— Nora Bateson, 2017
Warm data describes the interactions, relationships and information flow in complex adaptive systems from which health and wellbeing can emerge as positive systemic properties. Brian’s call for a science of qualities was motivated by the need for such an approach to inform appropriate participation in the ongoing process of life as a planetary process. He understood that a precondition for this process to continue was to pay closer attention to how our human agency in these nested systems contributes to either regenerative or degenerative patterns.
The most adequate qualitative indicator for judging whether we are participating appropriately is the emergence of health, resilience, adaptive capacity and well-being at the different scales of the living holarchy of nested complexity that supports life as a planetary process.
There is no destination sustainability or destination regenerative cultures — as some kind of end point we arrive at to live happily ever after — rather, the path towards human, community, ecosystems and planetary health and wellbeing is a continuous process of exploration, transformation and dilemma navigation. Donella Meadows reminded us in a posthumously published paper entitled ‘Dancing with Systems’:
“… there is plenty to do, of a different sort of “doing.” The future can’t be predicted, but it can be envisioned and brought lovingly into being. Systems can’t be controlled, but they can be designed and redesigned. We can’t surge forward with certainty into a world of no surprises, but we can expect surprises and learn from them and even profit from them. We can’t impose our will upon a system. We can listen to what the system tells us, and discover how its properties and our values can work together to bring forth something much better than could ever be produced by our will alone. We can’t control systems or figure them out. But we can dance with them!”
— Donella Meadows, 2002
In my own work as a regenerative development consultant and educator, as well as my masters thesis on ‘Exploring Participation: Holistic Science, Sustainability and the Emergence of Healthy Wholes through Appropriate Participation’ (Wahl, 2002), my PhD on ‘Design for Human and Planetary Health: A Holistic/Integral Approach to Complexity and Sustainability’ (Wahl, 2006a & 2006b) and my book ‘Designing Regenerative Cultures’ (Wahl, 2016) — the central focus has revolved around one question:
How might we redesign how we meet human needs in ways that support and heal rather than exploit and degrade the biotic community and the planetary life support systems upon which our future depends?
A culture emerges out of the complex web of qualities of the interactions, relationships and information flows that are conditioned by its history and the bio-cultural uniqueness of the place it inhabits. What kind of awareness and capabilities set the conditions for a culture to have a regenerative impact on entangled social and ecological systems?
How do we accept uncertainty and the uncontrollability of complex systems and humbly attempt to design for — or facilitate — the emergence of health, resilience and adaptive capacity of the complex Social Ecological Systems (S.E.S.) we participate in? In a 2007 paper on ‘Scale-linking design for systemic health’ (Wahl, 2007) I suggested:
“The health and wellbeing of individuals, communities, cities and societies depend critically on the resilience and health of ecosystems and on vital ecosystems services that are provided by ecological processes within the biosphere. Therefore, one overarching goal of design for sustainability should be to improve and maintain human, ecosystems, and planetary health. […] sustainable design is by necessity scale-linking and salutogenic (health-generating) design across all scales of the complex dynamic system that joins nature and culture, as well as global, national, regional and local scales.”
The notion of salutogenic design for planetary health based on an understanding of health as an emergent property of nested living systems was still very new in 2006. It challenged many academic silos when I applied for post-doctoral research funding. Yet in recent years the Planetary Health Alliance (2019) has grown to over 140 member institutions in 30 countries. The link between human health and ecosystems and planetary health has gained the attention of researchers and policy makers around the world.
“While more food, energy and materials than ever before are now being supplied to people in most places, this is increasingly at the expense of nature’s ability to provide such contributions in the future and frequently undermines nature’s many other contributions, which range from water quality regulation to sense of place. The biosphere, upon which humanity as a whole depends, is being altered to an unparalleled degree across all spatial scales. Biodiversity — the diversity within species, between species and of ecosystems — is declining faster than at any time in human history.”
— ‘International Science and Policy Platform on Biodiversity and Ecosystems Services’ (Diaz et al., 2019)
Alarming recent reports like the one quoted above are beginning to create widespread awareness that we are facing the real and present danger of short- to mid-term human extinction. The question of how to design for the emergence of systemic health across scales in order to support human and planetary health is now more important than ever. Brian Goodwin’s call for a science of qualities has gained in significance in this context.
We have to be mindful as we now search for wise responses with an unprecedented urgency. Such responses will need to be informed by both the best of quantitative science and the new qualitative approaches to science we now urgently need to develop.
The science of planetary healing and ecosystems restoration and the art of regenerating our communities and bioregions will have to pay renewed attention to the qualitative aspects and beauty of our profound interdependence and interbeing with the wider community of life.
[This is an adapted section of an essay I am still working on, as a contribution to a special edition of Acta Biotheoretica to commemorate the significance of Brian Goodwin’s work on the occasion of the tenth anniversary of his passing.]
[Link to an interview with Brian Goodwin I recorded in 2007.]
Daniel Christian Wahl — Catalyzing transformative innovation in the face of converging crises, advising on regenerative whole systems design, regenerative leadership, and education for regenerative development and bioregional regeneration.
Author of the internationally acclaimed book Designing Regenerative Cultures
Medium: Blog with more than 340 articles
Avoiding extinction: participation in the nested complexity of life was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.
Jon Christensen and Chris Hickman of Kelsus and Rich Staats of Secret Stache conclude their series on Bret Fisher’s DockerCon 2019 session titled, Node.js Rocks in Docker for Dev and Ops.
Rich: In episode 60 of Mobycast, we conclude our series on Bret Fisher’s DockerCon session, Node.js Rocks in Docker for Dev and Ops. Welcome to Mobycast, a weekly conversation about cloud-native development, AWS, and building distributed systems. Let’s jump right in.
Jon: Welcome, Chris. It’s another episode of Mobycast.
Chris: Hey, Jon. Welcome. It’s good to be back.
Jon: It’s good to have you back. We’re doing part two today of the talk that you listened to at DockerCon that was one of your favorites from Bret Fisher and that Node.js Rocks in Docker for Dev and Ops. This is the part two of that.
Last week, we talked about just some Node Dockerfile best practices. Maybe you could just give us a quick recap on what we covered there and then we’ll jump into the rest of the talk. There’s a lot more good stuff.
Chris: Absolutely. First, I was hoping we do our, “What you’ve been up to?”
Jon: Oh, absolutely.
Chris: Just because I have something I definitely just want to talk about.
Chris: It’s interesting news. Just recently, Docker issued a press release saying, “Steve Singh is stepping down as CEO.”
Jon: We’re just talking before the podcast started and I didn’t know you have this secret.
Chris: Yeah. Keep it fresh and spontaneous. Not terribly shocking news but pretty interesting for a whole bunch of reasons. This is a week after DockerCon is when they issued a press release. It’s like, “Hmm, probably would have been nice to do this before DockerCon,” then, help make that introduction, if you will, and have the transition. According to Docker, the deal wasn’t fully in place. That’s the reason why they didn’t do that.
Jon: It didn’t start with the huge security. They had three days before DockerCon.
Chris: Yeah. All this stuff factors into it, like what’s really happening behind the scenes. The official word from them is they’ve been talking about this for months and it wasn’t finalized until after DockerCon. But who knows? The person they’ve tapped to come in as CEO is Rob Bearden, who is the former CEO of Hortonworks. Also, a few other open source companies like SpringSource and JBoss.
Jon: Wow. Somebody who speaks the language of an enterprise for sure.
Chris: Yes and from an open source standpoint. It’s interesting there. Again, I’m just going to reiterate this. I’m just going to lay down the prediction. Docker is going to be acquired by VMware. VMware is going to overpay for them, but they’re going to do it. This could be a one-two punch for VMs to containers with now […], the Kubernetes founders and then Docker.
I think it’s actually going to be a win-win for both of the companies. I think Docker is definitely in a spot about here, really struggling to find its footing and what that business model is. Acquisition is really the only exit for them. They keep talking about like, “Oh, yeah. We can go IPO and we’re going to be cash flow positive by the end of the year.” I just don’t see that.
Jon: I’d love to disagree and have a little bit of conflict on the show, but that sounds right to me.
Chris: Yeah. We’ll see. I would not be surprised if within six months, there’s an acquisition.
Jon: I can’t remember the name of the CEO you said, but it would be interesting to see what kind of ties he has to VMware.
Chris: That’s interesting that you bring that up. He was COO at SpringSource, and that was acquired by VMware for $420 million in 2009. At Hortonworks, apparently, he was there when they went public, and they were acquired by Cloudera last year. He’s definitely got the experience and the chops. He’s got the connections side of things. Again, VMware, I think this is the next acquisition.
Jon: It does make sense. It would make VMware the source of VMs. That’s where you go for operating systems that are not operating systems.
Chris: Yeah. Just the whole enterprise space, the hybrid cloud space, it actually makes sense. I think from an ecosystem standpoint, this acquisition makes sense for them, and I think they will overpay for it. From a pure revenue business standpoint, they will make something of it, hopefully. The value they place on it is not going to be based upon Docker’s standalone value. It’s going to be the synergistic value that someone like VMware can extract from it and build on it.
Jon: Right. Very cool. Now we can go into our recap of part one.
Chris: In part one of this, we got through basically talking about Dockerfile best practices for Node.js applications. We talked about base images, that’s what you’re starting with, and what some guidelines are for how to go build the best Dockerfile. We talked about node_modules and making sure that’s not included in your image. We talked about how, especially with native code sometimes being compiled in node_modules, it’s really important that you’re building that for your target platform correctly. And then we finished up talking about least privilege principles, specifically taking advantage of the built-in node user that comes with the official Node.js Docker images. It’s there, but it’s not enabled by default; you have to do some work to actually switch over to use it. That’s what we covered in the first part.
Here in the second part, we’ve got a lot more to cover. I think Bret’s talk was 40 minutes, and that was probably just the first seven. This is traditional Docker. Every DockerCon I’ve been to, it feels like these sessions are just drinking from the fire hose. They’re always 40 minutes long, but given the amount of material they’ve given you, it always feels like at least 60 minutes, if not 90 minutes, worth of material.
Jon: Right. Going into the next thing that he talked about, it looks like he talked about Node process management. I’m really curious about this because recently, with the news of the container breakout stuff, and at Kelsus Camp a couple of months ago, we went a little deeper. That wasn’t Mobycast; it was just with our own company. We did some work to learn more about how Unix processes work, so they’ve been on my mind recently. I’m curious what you have to say about process management for containers.
Chris: There are several subsections to this, all dealing in that space. We’ll be able to dive into that a little bit. It is interesting to really understand what’s going on here. At the end of the day, it’s processes all the way down.
With that, as far as process management goes for your actual containers, you don’t really need anything extra there. Rely on Docker, rely on your orchestrator, to do that for you.
With Node.js in particular, there have been so many tools out there for dealing with process management. Things like Forever, PM2, nodemon, and there’s the Cluster module. There’s always been process management in that space.
Jon: For those listening that might not be Node people, I think it’s because Node doesn’t have a management process that keeps things running when there’s an error. If you make a mistake in your code, the code just shuts down. It just terminates the process.
Chris: Absolutely. There is no built-in process management per se with Node. If you have an unhandled exception, there goes your process. Your server’s done.
Jon: I think people who are used to working with application servers, whether it’s Java ones, or Python ones, or Ruby ones, are used to that application server being really resilient to errors happening in the components of its code.
Chris: Yeah. It’s the youthfulness of Node versus something like Java or .NET, too. These tools spring up, and then you see more and more of them coalescing with the maturity of the platform and whatnot.
There are also these tools that have been out there for things like hot reloading of code. You’re developing on your machine, you’re running it, you’re testing it, then you go and change a line of code. You have a process manager that is doing things like watching the file system for changes, and when it sees one, it crashes the process and restarts it, picking up those changes that you just made.
His point here was just, “You don’t need to use this stuff in production on the server. Instead, just let your orchestrator and Docker handle this for you.” We have things like health checks. You can spin up as many of these tasks as you need; let your orchestrator do that. The caveat is to use something like nodemon when you’re developing locally. That makes sense because that’s going to give you things like hot reloading.
Another part of this was really calling out that using npm start is an anti-pattern for your Node apps. Do not use npm to start your app; instead, you should be starting it directly via the Node process. We’ll get into why that’s the case a little bit more.
Those are the two main ideas being pushed across in that particular section of the talk. From there, it went into, “Let’s talk about shutdown now. What does it mean for a healthy shutdown in a Node application?”
Jon: A Node application in a container, right?
Chris: It is for a Node application in a container. Some of this applies outside of containers as well. Specifically, running inside of a container, it’s running as PID 1. It’s the first process in the system, or the container: the init process. This is typically what’s happening. You’ve dockerized your Node app. You’re spinning up a container based upon that image. This is the first thing that’s running. It’s running as PID 1. It’s the init process.
Just some background information there. The init process in containers has basically two jobs. One is to reap zombie processes. Zombie processes are subprocesses that have lost their parent process. It’s also responsible for passing signals to the subprocesses. In general, with Node apps, zombie processes shouldn’t be much of an issue. You’re really just running your Node app, and unless it’s spawning a bunch of other processes, it’s probably not going to be too much of an issue. The signaling is important, though; that’s especially where this shutdown comes into play.
For proper shutdown, Docker uses Linux signals to control the apps. These are things like SIGINT, SIGTERM, and then SIGKILL, the force quit, if you will. SIGINT and SIGTERM are the Linux signals that allow a graceful stop of your application. When you do a docker stop on a particular container, this is what it’s doing. It sends SIGTERM to that container and then it waits. By default, it’s going to wait 10 seconds for that container to respond and shut down. If it doesn’t, then it will kill it. It’ll do that force quit on it.
In order to have a graceful, healthy shutdown, you need a container that’s going to respond to these Linux signals. This is where npm comes in. npm does not respond to SIGINT and SIGTERM. If you’re using npm start to launch your app, you have a problem here. Those signals are not going to get processed at all.
Chris: That’s why npm start is the anti-pattern. The recommendation is to not do it.
Jon: If I could just make sure I understand: if you do npm start, then your Node app is running within a process that’s owned by npm. Then, when you send signals to that process saying, “Okay, we’re done. Stop,” npm is like, “I’m busy. I don’t hear anything.”
Chris: Yeah. In that case, NPM is PID 1.
Jon: Yeah. Your actual Node code is maybe a child of that. Okay, got it. Interesting.
Chris: The other thing to keep in mind: Node by default does not respond to SIGINT and SIGTERM, but you can make it with code. You specifically have to add code in there to handle them. This gives rise to what you can do. He outlined three possible workarounds and solutions here. The first is a temporary or band-aid fix. Docker, as of version 1.13, has a --init flag. What that does is wrap your process with a lightweight init system. It’s called Tini, and it’s designed to run as PID 1 and to do the right things.
When you’re running your container with docker run, if you use --init, that’s basically going to use Tini as PID 1, and now you will be responding to the SIGINT and SIGTERM signals. The problem with that is that you may not have control over this. We run in AWS, but we’re using ECS. We’re not the ones making the docker start and stop commands. It’s actually the ECS Agent.
I haven’t looked into what integration the ECS Agent has with Docker and whether it then allows you to actually specify this kind of argument in your task definition file.
Jon: I bet it does.
Chris: It probably does, but that’s what you could do. Alternatively, if you can’t modify your docker start command then another workaround would be to simply build Tini into your image and have Tini be PID 1.
Jon: You’d have the last command of your Dockerfile run via a Tini command?
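A minimal Dockerfile sketch of that approach (the base image tag, app layout, and entrypoint file name are assumptions for illustration, not from the talk):

```dockerfile
# Sketch: bake Tini into the image so it runs as PID 1 and forwards
# SIGINT/SIGTERM to the Node process, even if you can't pass --init.
FROM node:10-alpine
RUN apk add --no-cache tini
WORKDIR /app
COPY . .
# Tini is PID 1; "--" means everything after it is the child command.
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["node", "server.js"]
```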
Chris: That’s pretty straightforward and easy to do, and you can fix this PID 1 issue. But probably the best way to do this, definitely the right way to do this, is to just update your Node app to make sure that it properly captures the Linux signals. Make sure it has handlers for SIGINT, SIGTERM, and SIGKILL. Actually, just SIGINT and SIGTERM.
Chris: Because you can’t catch some of those, and you don’t need SIGKILL, most likely. In addition to that, just don’t use npm to start your app. Just call Node directly. Have Node be your PID 1, invoke that to start your app up, and then process these signals correctly.
Jon: I could be wrong here, but if somebody’s listening and they’re like, “I really want to contribute an open source update. I want to contribute something to Node,” this could be an area where you could do something. It feels like there’s some general stuff that pretty much any Node app would benefit from if you just put it into Node itself. I mean, it’s calling out to be worked on. This is a thing where every single person that ever builds a Node app has to write this extra code. Why not have one person do it at the root of the tree instead of everyone doing it on their leaves?
Chris: Yeah. The good news here is that Node already has the event handling in it. It’s actually really easy to do this, and it actually is boilerplate code to add in these event handlers for these Linux signals. What isn’t boilerplate is that for every app, shutdown is something different. The bodies of those event handlers are really up to you. You don’t want someone else doing it. You have to decide, “I’m shutting down. What cleanup do I need to do?”
Jon: That I agree with. It’s so true. You can never know that in the general case.
Chris: Yeah. There’s not a lot of work here to do it. It’s just that, as a standard practice, people don’t think about it. You do have things like, after 10 seconds, Docker’s just going to kill it. So instead of shutting down in two or three seconds, it shuts down in 10 seconds. It’s one of those things where maybe people just don’t scratch their head and ask, “Why is it taking 10 seconds to shut down every time? Why doesn’t Control-C work in the console?”
Jon: Right. Where do we go next from here?
Chris: It’s part of this theme of better, healthier shutdown. He talked a bit about connection tracking. Basically, you should track your HTTP connections and send them FIN packets when you’re shutting down. This is mostly for people that have keepalive connections. When you’re handling one of these signals, you’re going to be calling server.close to shut down your server, but by default, Node.js is not going to close keepalive connections when that happens. They’re just going to be abruptly terminated. Instead, what you should be doing is sending FIN packets so that these things close in a graceful way. You’ll also want to make sure that you stop accepting new connections, existing ones get closed, and whatnot.
There actually is an open source npm module out there called stoppable, and it’s something you can wrap your Node.js server object with. What it does is provide this really graceful connection handling. When you call stop, it stops accepting new connections and closes the existing idle connections without killing requests that are currently in flight. It allows for really graceful shutting down, draining of those connections, and shutdown.
Jon: I feel like it’s maybe just the Node ethos of super, super minimal and very, very lightweight, making no assumptions about what people might be using it for that has led to this. I come from more of a, “Man, things are a lot easier when you have some opinions in your software.” It bugs me that this is even a thing. It also bugs me that stoppable is a separate module that you need to add. I would just rather have the peace of mind that this is not something that I need to think about. The fact that you have to think about this is almost like showing off. It’s like, “Guess what I thought about, everybody?”
Chris: FIN packets.
Jon: Yeah. I don’t know. Anyway.
Chris: From day one, the ethos of Node was that it’s going to be very much like Unix system software: do one thing really, really well, and then you can have a community of other tools that build up around it, that build up that ecosystem. I think now, perhaps, they wouldn’t want to be so hardcore about that, except now they have things like backward compatibility to deal with. If they go and change how this works, then what do they break? They’ll probably break a lot, and they’re going to get a lot of flak for that and whatnot. Some of these decisions were made eight, nine, ten years ago, and we just have to live with them now.
Jon: Right. Boy, does it feel to me like a decision that wasn’t actually made, but more like, “Oh, hey. Guess what we don’t do? We just didn’t get to that. We just released. We released early and often,” and that was one of the things they didn’t think about when they released.
What comes next after everyone remembers to use stoppable for their HTTP connections, particularly their keepalive ones, what comes next?
Chris: There was quite a bit of discussion in Bret’s talk about multi-stage Dockerfiles and how to use those to basically break out your process into different stages, like production versus dev versus test and whatnot. That’s a whole big topic. We’ve touched a bit on this in previous episodes of Mobycast, and maybe we’ll do a future one as well, but we’re going to skip over it for this recap just because it’s such a big topic. We could talk about that for quite some time, and we want to keep this at a reasonable length today.
After talking about the multi-stage Dockerfiles, he talked a bit about security scanning and auditing. This is one of those things that’s a no-brainer. It should just be table stakes. It’s so easy to do. You can do it for free, or you can use a paid service, but just get the auditing and scanning as part of your CI process.
npm has the audit command, and that goes through and checks against known security vulnerabilities. You can also do full CVE scanning with tools like MicroScanner from Aqua. It’s really easy to just build it into your Docker image build or as part of your CI process. Just do it. Make sure you’re doing CVE scanning. This is definitely something that, on our team, we’re going to be working towards and making sure we’re doing.
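As a sketch, the npm side of that CI step can be as small as this (the strictness threshold is an assumption; pick the level your team wants to fail builds on):

```shell
# Fail the CI build on known vulnerabilities before the image ships.
npm ci                         # clean, reproducible install from the lockfile
npm audit --audit-level=high   # exits non-zero if high-severity issues exist
```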
Jon: Yeah. I think a great way of finding yourself not doing it is ignoring one of those npm audit messages three times; then you’ll ignore it forever. Don’t let yourself do that three times in a row, just like any kind of compiler warnings. I’m sure everybody has been on a project where the first time you compile it, you’re like, “How are we living with 5000 compiler warnings? How did this happen?” It’s the same kind of thing, and it’s more serious when there are security audit findings than when there are just compiler warnings about code that should probably be written differently.
Chris: Yeah, and even just seeing the results of it. If you run these scanners, you’re going to see quite a few things. Some of those are going to be things you can change, and some are going to be stuff you can’t do anything about because it’s coming from dependencies. But just having the knowledge of what the surface area is here, what’s going on in the code we’re writing in particular and in the dependencies we have control over. Just having that information, you can then make the decision on how strict you want to be. Do you really want a failed build, or do you want to continue on? Just having that knowledge is what’s valuable here.
Jon: It’s wild how fast those audits get updated. I’ve been writing some Node code myself for the past few months and have just been blown away. I’d clear out all my security audit findings, which always requires a little bit of work, a little bit of updating, and then two, three, or four days later it’s like, “Whoa, there’s another one.” It works. They really do track.
Chris: Absolutely. It’s done at the CVE level, where it’s coming from across all different software packages and whatnot. They’re also doing more security evaluations of the actual modules themselves inside npm, where they’re actually doing security audits and flagging issues.
Jon: And then it’s important, too, in places like Babel when you’re doing transpiling and you’re actually letting something touch every piece of your code. It’s so critical to make sure there’s not malware or something like that.
Chris: Yeah, or making sure an npm module that you’re installing doesn’t go and grab passwords or creds that are in memory and then forward them along to some proxy or something.
Jon: The point wasn’t to single out Babel. The point is just that it’s something that has access to everything. Where do we go from security scanning?
Chris: There was quite a bit of discussion about Docker Compose and having that as part of your workflow, especially how it integrates with things like Docker health checks. But the key point for me to call out is the myth busting around the Docker Compose YAML version. There was v1, then v2, and now there’s v3.
One of the myths is that v3 replaces v2. It does not. V2’s focus is basically on single-node development and test, versus v3, which came out really for multi-node orchestration. It’s really for tools like Swarm and Kubernetes. It was the additions needed to enable things like deployments and managing clusters and whatnot.
This is just a high-level point: just realize that if you’re on v2 of the Docker Compose YAML, that’s okay. You’re not missing much. You don’t have to feel like you’ve got to upgrade to v3. So, something to keep in mind.
Jon: I have some feelings about that but I’m just going to let them go. I’ll just stick with V2 and not think too hard about it.
Chris: There you go. He also talked a bit about node_modules, specifically how you mount those: volume mounts versus bind mounts. Again, we’ve talked about this in the past: poking holes between the container and the host, what you’re sharing, and finding that right mix between the isolation that containers promise versus the utility and flexibility for developers to do things like hot reloading and whatnot.
There are various techniques you can use with node_modules to make sure you’re not getting into a situation where you’re using modules that were compiled for one target but running on another. This, again, is one of those bigger topics we could talk quite a bit on, so let’s just leave it at that.
The one little tip here that was useful: if you’re on a Mac and you are using volume mounts, use the delegated decoration inside your Docker Compose file. When you specify the volume inside Docker Compose, if you add the delegated flag to it, you’re going to get a performance increase. Go and do a search on it; go look up some more on it. As a pro tip, you can get some better performance there if you use that delegated write mode on your volume mounts.
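A sketch of what that looks like in a Compose file (the service name, image tag, and paths are made up for illustration):

```yaml
version: "2.4"
services:
  web:
    image: node:10-alpine
    command: node server.js
    volumes:
      # "delegated" relaxes host/container write-sync guarantees on
      # Docker Desktop for Mac, which can noticeably speed up file I/O.
      - .:/app:delegated
```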
Jon: That’s interesting. I think that’s perfect because it dovetails with an episode we did before about not letting things be a mystery. We don’t really have time today to go into exactly how that works and why it gives you that performance increase, but don’t do that without understanding that. Go read about it and then do it.
Chris: Crawl, walk, run.
Chris: The final section of Bret’s talk was about health checks, specifically how you can leverage Docker health checks via your Docker Compose file. Again, we did a whole episode on health checks and […]. We could probably do a whole other episode just on Docker health checks and Compose files: how you set those up, how you can specify dependencies and conditions based upon what the health checks are, and what the status of the health checks is for the various services in your Docker Compose file.
The point here was just to definitely be aware of this. Definitely consider leveraging it and using it. It’s part of the infrastructure that you’re using with Docker Compose, and you should take a close look at it.
Jon: Very cool. I think we can finish up with something I’ve been eyeing this whole time. It’s a checklist and I love a good checklist. What do we have?
Chris: In summary, here’s your production checklist for running Node.js under Docker. One, make sure you’re calling Node directly. This we talked about: don’t use npm start. Instead, have Node be your PID 1. And as we said, make sure you’re handling and capturing SIGTERM and properly shutting down.
Also, when you’re building, make sure you have a .dockerignore file, and definitely make sure things like node_modules are included in there, your .git directory is in there, log files, any other artifacts. Just make sure that whatever’s on your build machine when you’re building images, you’re excluding the stuff that shouldn’t be in the image with the .dockerignore file.
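A typical .dockerignore along those lines might look like this (entries beyond the ones just mentioned are common additions, not from the talk):

```
node_modules
.git
*.log
npm-debug.log
```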
Another part of the checklist: make sure you’re using npm ci, or you can also do npm i with the --only=production flag. You want the minimal set of code artifacts when you’re doing an npm install. Use those options.
Jon: I’m just curious about that. I’m not as familiar with npm as I should be. npm ci or npm i --only=production, I would guess, knows to only put the parts that production needs into production. I’m guessing that whoever’s writing the library that you’re installing with npm needs to be aware of that and know the difference between what they should put into a production build versus a development build. If they weren’t aware, then there probably is no difference. […] feature, right? Like, did you build support for this into your library or not? I guess that’s my question.
Chris: Yeah. I think this really just applies to your package.json file. In package.json, you can have your dependencies and then you have your devDependencies. Production is just going to install the dependencies, not the devDependencies.
Jon: So it’s not down to a library level?
Chris: No. Don’t think of it like, “Oh, back in C land it’s an optimization flag on my compiler to do things like unroll loops,” or anything like that. It’s not that.
Jon: Okay, thank you.
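In other words, the split lives in package.json itself. A sketch (package names and versions here are just illustrative):

```json
{
  "dependencies": {
    "express": "^4.17.0"
  },
  "devDependencies": {
    "nodemon": "^1.19.0",
    "eslint": "^5.16.0"
  }
}
```

A production-only install (npm ci --only=production) pulls in the dependencies block and skips devDependencies entirely.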
Chris: Yeah. Another bullet would be: make sure you’re scanning, auditing, and testing your builds. We talked about that CVE scanning and using things like npm audit. Really leverage what’s there that you can take advantage of in your CI/CD pipeline. Then, health checks, again, for readiness and liveness. Docker and the infrastructure support a very robust ecosystem of health checks there, between specifying conditions, how they’re used in your Docker Compose file, and Docker itself making those health checks in the end.
Jon: Alright, excellent. Thanks so much to Bret Fisher for this good talk that we’re able to go over. I learned a lot from it even though I wasn’t present in the audience. Thanks for explaining it to me, Chris.
Chris: Yeah, you bet. It’s a very good talk, a laundry list of very actionable things to go do. I enjoyed this because there are at least two or three things here where it’s like, “Yup, we’ve got to go and do that.”
Jon: Right. Of the DockerCon talks that you listened to, there’s one more that we might dip into the DockerCon bucket for and do another episode on. That’ll likely come up next week. I’m about to go to GlueCon in a couple of weeks, too. Shoutout to that conference; it’s really fun, and I’ve been going for a few years now. There should be some interesting things out of that one to talk about, too.
Jon: Thank you so much, Chris.
Chris: All right. Thanks, Jon.
Jon: Talk to you next week.
Chris: See you.
Rich: Well dear listener, you made it to the end. We appreciate your time and invite you to continue the conversation with us online. This episode, along with show notes and other valuable resources is available at mobycast.fm/60. If you have any questions or additional insights, we encourage you to leave us a comment there. Thank you and we’ll see you again next week.