Random generation of network models in R


I have already talked about networks a few times on this blog. In particular, I used this approach to explain spatial segregation in a city and to solve the Guess Who? problem. However, one open question is how to generate a good network. Indeed, I aim to study strategies to split a network, but I first need a realistic network to work with. I could have downloaded real network data, but I’d rather study the different models proposed to generate networks.

I will explain and generate the three most famous models of random networks:
– The Erdős-Rényi model;
– The Watts and Strogatz model (small-world model);
– The Barabási-Albert preferential attachment model.

We represent each model with an adjacency matrix: the entry at column i and row j is 1 if and only if node i and node j know each other. Since we simulate undirected networks (i.e. if i knows j then j knows i), we can work with an upper-triangular matrix and not worry about the lower triangle of our matrices. Here, I use the R function image() to represent these matrices: the 0s are in red, the 1s in white.

The Erdős-Rényi model.

This model is certainly the simplest of the three. Only two parameters are required: N, the number of nodes we consider, and p, the probability that any given pair of nodes is linked by an edge.

This model assumes that the existence of a link between two nodes is independent of the other links of the graph. According to Daniel A. Spielman, this model was not created to represent any realistic graph. However, it has some very interesting properties: the average path length is of order log(N), which is relatively short.
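The short-average-path claim is easy to check empirically. Below is a quick sketch in Python rather than R (the article’s own code, further down, is in R); the helper names erdos_renyi and average_path_length are my own, chosen for illustration.

```python
import random
from collections import deque

def erdos_renyi(n, p, rng):
    # Undirected ER graph as adjacency sets: each pair linked with probability p
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def average_path_length(adj):
    # Mean shortest-path length over all connected ordered pairs (BFS from each node)
    n = len(adj)
    total, pairs = 0, 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

rng = random.Random(42)
apl = average_path_length(erdos_renyi(200, 0.05, rng))
# With 200 nodes, the average path stays tiny, in line with the log(N) claim
```

Even with 200 nodes, the average path length lands in the low single digits, far below the number of nodes.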

Besides, if p < 1, for N large enough, the clustering coefficient converges toward 0 (almost surely). The clustering coefficient of a node is, in simple words, the ratio of the existing edges between the neighbors of this node to all the possible edges between these neighbors.

In this figure, the clustering coefficient of A is 1/3: there are 3 possible edges between the neighbors of A (X-Y, Y-Z, Z-X) and only one (Z-Y) exists.
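To make the definition concrete, here is a small sketch in Python (the article’s own code is in R); it works directly on a 0/1 adjacency matrix like the ones used throughout this post and reproduces the figure’s example, where A’s coefficient is 1/3.

```python
def clustering_coefficient(adj, node):
    # Fraction of possible edges between the node's neighbors that actually exist
    n = len(adj)
    neighbours = [i for i in range(n) if adj[node][i]]
    k = len(neighbours)
    if k < 2:
        return 0.0
    existing = sum(adj[i][j] for i in neighbours for j in neighbours if i < j)
    return existing / (k * (k - 1) / 2)

# The figure's example: A knows X, Y, Z; only the Y-Z edge exists.
# Nodes: A=0, X=1, Y=2, Z=3
adj = [[0, 1, 1, 1],
       [1, 0, 0, 0],
       [1, 0, 0, 1],
       [1, 0, 1, 0]]
cc = clustering_coefficient(adj, 0)  # 1/3
```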

The Watts and Strogatz model (small-world model).

This model is really interesting: it assumes that you know a certain number of people (k) and that you are more likely to know your closest neighbors. The algorithm, though more complicated than the Erdős-Rényi model’s, is simple. We have 3 parameters: the size of the population (N), the number of close neighbors (k), and a probability p. For every node, each of its k closest neighbors is kept as a link with probability (1-p). Each close neighbor that is not kept is replaced by a link chosen at random among the more distant nodes.

Because this model generates clusters of people who know each other, it is really easy to be linked indirectly (and in very few steps) with anyone on the map. This is why we call this kind of model a small-world model. Of the three models described here, it is the closest to a realistic social network of friendship.

The Barabási-Albert preferential attachment model.

This model is computed with a recursive algorithm. Two parameters are needed: the initial number of nodes (n0) and the total number of nodes (N). At the beginning, every initial node (the first n0 nodes) knows all the others; then we create the remaining nodes one by one. Each new node is linked randomly to an already existing node, with probability proportional to the number of edges that node already has. In other words, the more links you have, the more likely new nodes are to be linked to you.

This model is really interesting: it is the model for any network following the “rich get richer” idea. The more friends a node has, the more likely new nodes are to befriend it. This kind of model is relevant for the web: the more famous a website is, the more likely other websites are to link to it. For example, Google is very likely to be connected to many websites, while it is very unlikely that my small, little-known blog is connected to many.

The code (R):

# ER model
generateER = function(n = 100, p = 0.5){
  map = diag(rep(1, n))
  link = rbinom(n*(n-1)/2, 1, p)
  t = 1
  # Fill the upper triangle with the Bernoulli draws
  for(j in 2:n){
    for(i in 1:(j-1)){
      map[i,j] = link[t]
      t = t + 1
    }
  }
  return(map)
}

# WS model

# f(j, mat) returns the row/column j of the matrix (the edges of node j)
f = function(j, mat){
  return(c(mat[1:j, j], mat[j, (j+1):length(mat[1,])]))
}

# g(j, mat) lists the index pairs (i, j) of all potential edges of node j
g = function(j, mat){
  k = length(mat[1,])
  a = matrix(0, nrow = 2, ncol = k)
  for(i in 1:(j-1)){
    a[1,i] = i
    a[2,i] = j
  }
  for(i in (j+1):k){
    a[1,i] = j
    a[2,i] = i
  }
  a = a[,-j]
  return(a)
}

# callDiag(j, mat) extracts the values of all potential edges of node j
callDiag = function(j, mat){
  return(diag(mat[g(j,mat)[1, 1:(length(mat[1,])-1)], g(j,mat)[2, 1:(length(mat[1,])-1)]]))
}

generateWS = function(n = 100, k = 4, p = 0.5){
  # Start from a ring lattice where each node is linked to its k closest neighbors
  map = matrix(0, n, n)
  down = floor(k/2)
  up = ceiling(k/2)
  for(j in 1:n){
    # 1-based modulo so the wrap-around indices stay valid R indices
    neigh = ((((j-down):(j+up)) - 1) %% n) + 1
    map[neigh[-(down + 1)], j] = 1
  }
  map = (map|t(map))*1
  # Rewire: each close edge is kept with probability (1-p), otherwise it is
  # replaced by a random edge to a further node. (The probability test was
  # missing from the published fragment and is reconstructed here.)
  for(j in 2:n){
    for(i in 1:(j-1)){
      if((j-i <= down)|(j-i >= n-1-up)){
        if(runif(1) < p){
          map[i,j] = 0
          samp = sample(which(callDiag(j, map) == 0), 1)
          map[g(j, map)[1, samp], g(j, map)[2, samp]] = 1
        }
      }
    }
  }
  return(map)
}


# BA model
generateBA = function(n = 100, n0 = 2){
  mat = matrix(0, nrow = n, ncol = n)
  # The n0 initial nodes all know each other
  for(i in 1:n0){
    for(j in 1:n0){
      if(i != j){
        mat[i,j] = 1
        mat[j,i] = 1
      }
    }
  }
  # Each new node attaches to one existing node, with probability
  # proportional to that node's current degree
  for(i in (n0+1):n){
    degrees = c()
    for(k in 1:(i-1)){
      degrees = c(degrees, sum(mat[,k]))
    }
    link = sample(1:(i-1), size = 1, prob = degrees)
    mat[link,i] = 1
    mat[i,link] = 1
  }
  return(mat)
}

# Graphs

Random generation of network models in R was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Tesla Stock Plunges 22% in 1 Month as Trump Trade Fallout’s Biggest Loser

By CCN: In less than one month, the share price of Tesla stock has dropped from $273 to $211, a fall of more than 22.7 percent, following the fallout of recent trade discussions. While Tesla bulls remain optimistic about the long-term prospects of the firm, some strategists are concerned about the cash flow of the business and its performance in key markets such as China. Intensifying criticism from short sellers, a report of an alleged autopilot system malfunction, a serious cash flow issue, and the ongoing trade dispute between the U.S. and China are said to have

The post Tesla Stock Plunges 22% in 1 Month as Trump Trade Fallout’s Biggest Loser appeared first on CCN

Bitcoin: Sovereignty through mathematics CHAPTER ONE


All human interaction can be defined as trade. Yes, all human interaction. Every time a human being interacts with another, an exchange takes place. In every conversation we have, we exchange information with each other. Even the most trivial information is of some value to the other person. If information didn’t have any value to us, we wouldn’t talk to each other. Either what the other person says is valuable to us or we find it valuable to give information to them. Oftentimes both. At the core of all human interaction that isn’t violent is the fact that both parties perceive that they gain some value from it; otherwise the interaction wouldn’t have taken place at all. Civilizations begin this way. Two people finding it valuable to interact with each other. That’s all it takes.

So what constitutes value? What we find valuable is entirely subjective. A comforting hug, for example, probably has a different value to a two-year-old than it has to a withered army general. Even the most basic action, such as breathing, encapsulates the whole value spectrum. We tend to forget that even a single breath of air can be of immense value to us under the right circumstances. A single breath is worth more than anything on the planet to a desperate free-diver trapped under ice, while worth nothing to a person with a death wish in clean forest air on a sunny summer day. Value is derived from supply and demand, and demand is always subjective. Supply is not.

Since all of our lives are limited by time, time is the ultimate example of a scarce, tradeable resource. We all sell our time. We sell it to others and we sell it to ourselves. Everyone sells their time, either through a product that took them a certain time to produce, or as a service, and services always take time. If you’re an employee on a steady payroll, you typically sell eight hours of your day, per day, to your employer.
If you’re doing something you truly love to do, that eight-hour day still belongs to you in a way, since you’re doing what you’d probably be doing anyway, even if you had to do it for free. Sometimes, we sacrifice time in order to acquire something in the future. An education, for instance, gives no immediate reward but can lead to a better paying, and more satisfying, job in the future. An investment is basically our future self trading time with our present self at a discount. Once again, every human interaction can be viewed as trade. It’s rooted in physics. For every action there’s an equally big reaction. Trade is at the very core of what we are, and the tools we use to conduct trade matter a lot to the outcome of each transaction. Money is our primary tool for expressing value to each other, and if the creation of money is somewhat corrupt or unethical, that rot spreads down through society, from top to bottom. Shit flows downhill, as the expression goes.

So what is money, or rather, what ought money to be? In order for two people to interact when a mutual coincidence of needs is absent, a medium of exchange is needed to execute a transaction. A mutual coincidence of needs might be “You need my three goats and I need your cow”, or even “both of us need a hug”. In the absence of a physical good or service suitable for a specific transaction, money can fulfill the role of a medium of exchange. What most people fail to realize is that the value of money, just as the value of everything else, is entirely subjective. You don’t have to spend it. The problem with every incarnation of money that mankind has ever tried is that its value always gets diluted over time due to inflation in various forms. Inflation makes traditional money a bad store of value, and money needs to be a good store of value in order to be a good investment, or in other words, a good substitute for your time and effort over time.
Bitcoin tries to solve this by introducing absolute scarcity, a concept that mankind has never encountered before, to the world. To comprehend what such a discovery means for the future, one needs to understand the fundamentals of what value is, and that we assign a certain value to everything we encounter in life, whether we admit it or not. In short, we assign value to everything we do, value is derived from supply and demand, supply is objective and demand is subjective.

Free trade emerges out of human interaction naturally and it is not an idea that was forced upon us at any specific point in time. The idea that markets should be regulated and governed, on the other hand, was. Free trade is just the absence of forceful interference in an interaction between two humans by a third party. There’s nothing intrinsically wrong or immoral about an exchange of a good or service. Every objection to this is a byproduct of the current global narrative. A narrative that tells us that the world is divided into different nations and that people in these nations operate under various sets of laws, depending on what jurisdiction they find themselves in. All of these ideas are man-made. No species except humans does this to itself. Animals do trade, but they don’t do politics. Bitcoin, and the idea of truly sound, absolutely scarce money, inevitably makes you question human societal structures in general, and the nature of money in particular. Once you realize that this Pandora’s box of an idea can’t be closed again by anyone, everything gets put into perspective. Once you realize that it is now possible for anyone with a decently sized brain to store any amount of wealth in that brain, or to beam wealth anonymously to any other brain in the world without anyone ever knowing, everything you were ever told about human society gets turned on its head. Everything you thought you knew about taxes, social class, capitalism, socialism, economics or even democracy falls apart like a house of cards in a hurricane. It is in fact impossible to comprehend the impact Bitcoin will have on the planet without also understanding basic Austrian economics and what the libertarian worldview stems from.

Imagine growing up in the Amish community. Up until your sixteenth birthday you’re purposely totally shielded off from the outside world. Information about how the world really works is very limited to you since internet access or even TVs and radios are forbidden within the community. Well, from a certain perspective, we’re all Amish. How money really works is never emphasized enough through traditional media or public educational institutions. Most people believe that the monetary system is somehow sound and fair when there’s overwhelming evidence to the contrary, all over the globe. Ask yourself, do you remember being taught about the origins of money in school? Me neither. I don’t believe that there’s some great, global conspiracy behind the fact that the ethics of money creation isn’t a school subject, but rather that plain old ignorance is primarily to blame for the lack of such a subject. As soon as the math-skill threshold is high enough, people seem to stop caring about numbers. The difference between a million and a billion seems lost on a depressingly large part of the world’s population. In the chapters ahead we’ll explore the pitfalls of central banking, how money pops into existence and how inflation keeps us all on a leash.

Paperback book version available here: https://www.amazon.com/dp/1090109911

Bitcoin: Sovereignty through mathematics CHAPTER ONE was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Absolute imports with Create React App

Plus ESLint and WebStorm config

With the release of Create React App 3, we now have the ability to use absolute import paths, without ejecting.


If you’re reading this you probably don’t need me to tell you why this is a good thing. I’m going to anyway, though.

  • It’s easier to type out the paths, no more ../../../hell.
  • You can copy/paste code including imports into other files and not have to fiddle with the import paths.
  • You can move a file without having to update its import paths (if your IDE doesn’t do that for you anyway).
  • It’s neat.

As explained in the docs, you start by creating a jsconfig.json file in your root with these characters and symbols in it:
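The config itself did not survive in this copy of the article; per the Create React App documentation, the minimal jsconfig.json for absolute imports looks like this (the "baseUrl": "src" line is what does the work):

```json
{
  "compilerOptions": {
    "baseUrl": "src"
  },
  "include": ["src"]
}
```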


That’s great, now you can take something like this:


And make those imports prettier.


Unfortunately, this is where the docs stop. But you might not be done just yet.

WebStorm config

If you’re a WebStorm/IntelliJ user, you’re going to hear some complaints:

WebStorm assumes absolute paths are in node_modules (as per the Node.js rules), so we must tell it that we’re being fancy and using absolute imports.

First up, mark the src directory as a Resources Root.

Love a menu with 31 things in it

Now go to Settings > Editor > Code Style > JavaScript, go to the Imports tab and tick Use paths relative to the project, resource or sources roots.

So now WebStorm understands where those absolute paths are pointing. This means no warnings, and jump-to-source/autocomplete will work. It also means absolute paths will be used by the auto-import mechanism.

So if I have this file:


And I paste this code into it:


WebStorm will know I need <Button> and STRINGS and LINKS and insert the appropriate imports with the absolute paths.


Unfortunately it doesn’t sort them the way I want (npm packages first, relative imports last). Maybe this is possible and I just can’t work out how.

But still, I’d rather have to re-order imports than type them out like a Denisovan.

VS Code — no config required

VS Code understands jsconfig.json files out of the box, so ‘jump to source’ and Intellisense will work just fine with absolute imports.

And it doesn’t seem to care if you have an import path pointing to a file that doesn’t exist, so no config required there either.

(Side note: as of May 2019, WebStorm is still better than VS Code IMO. It has vastly superior git tools — particularly for conflict resolution — and is better for refactoring. But VS Code is catching up fast, and opens in a tenth the time.)

Capital letters by convention

Absolute paths have been possible for a long time with Webpack, and it has become convention to use PascalCase for your aliased import roots (this is how it’s done in the examples from the Webpack docs).

This is smart, and I would recommend doing the same in your codebase by renaming all your top-level directories to PascalCase.

When things like Components and Utils start with capital letters, it will be plain to see which imports are npm packages and which are your own source code. You’ll also never have a clash with an npm package.

For similar reasons, avoid files stored in the root of src that you’re going to be importing. For example, if you had src/constants.js, you’d have to do import constants from 'constants'; which is just odd.


ESLint config

CRA has a very minimal set of rules in its ESLint setup, and some strong opinions about why this is the case. If you’re clever like me, you’ll disregard the advice of Facebook (what do they know about React anyway?) and use something like Airbnb’s ESLint config.

If you do, you will soon learn that Airbnb use eslint-plugin-import, which checks for undefined imports, and will give you errors like so:

You can fix this with a settings prop in your ESLint config to tell it that your paths might be relative to src:
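The settings snippet is missing from this copy of the post; for eslint-plugin-import, the fix described is a resolver setting along these lines in your ESLint config:

```json
{
  "settings": {
    "import/resolver": {
      "node": {
        "paths": ["src"]
      }
    }
  }
}
```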


Note that you don’t need to install any npm package for this, that settings chunk is enough.

Side note, since we’re talking about ESLint: Do you use Prettier? You should. I think some people are drawn in by the promises made by the name, but turned off when they realise that a more fitting name would have been ‘Uglier’.

Yes, sometimes it behaves like a madman tearing up a Monet to mail it, maniacally grunting: Must. Fit. In. Envelope. But once you accept that it’s going to make your pretty code look pretty gross in places, you can build a bridge, get over it, and move on to reap the rewards: not discussing code style with other developers.

Restoring clarity

Absolute imports are a little bit of magic that might confuddle a new developer for a moment, so I suggest putting a few lines in your readme about what’s going on, including notes about IDE setup. You might even link to this post, and I totally promise I’m not going to change the content a year from now to be nothing but pictures of ducks, sorted by age.

It’s also worth defining when a developer should still use relative imports. I think it’s reasonable to say that sibling files should be imported with a relative path, but not anything where you need to go up the tree. And I’d suggest using relative imports for closely related child components. If you have a <Dropdown> with a <DropdownItem> child component, it’s probably overkill to use a full absolute path to import DropdownItem.

Hey that was a pretty short post!

Have a spectacular day my internet friends.

Absolute imports with Create React App was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Building Webiny — A Serverless CMS

Challenges and Lessons Learned Over the Last 12 Months

Webiny — Serverless CMS

First, we had metal, then it all went virtual and, finally, we moved to containers. Now, a new era of “servers” is emerging (if we can even call them that); one that is born in the cloud and designed for the cloud.

Evolution of serverless — Jason McGee

Werner Vogels, CTO of Amazon, said:

“Serverless is the future of development”

— and I couldn’t agree more.

However, not everything is peachy. The serverless environment is young(ish), and not all systems we use today are optimized for it, especially the content management systems. Let me explain.

Revolution over evolution

We are still chasing faster horses rather than thinking about how to teleport.

If the previous sentence confused you, let me give you an analogy.

Upgrading your WordPress from version 4 to version 5 is evolution; you have just been given a faster horse. This horse can jump higher, go for longer, and can even walk backward — but it’s still a horse. It’s still the same stack that was invented 15 years ago (initial release: May 27, 2003).

In the meantime, bright new minds have invented serverless technology, new API standards (like GraphQL), and cool new ways to build and construct UIs with libraries (like React). Today, we even have serverless databases. Previous CMSs were designed before any of these technologies existed, and those limitations are still baked into them.

The path of revolution requires us to build new things with technologies (and mindsets) that are more powerful, faster, safer, and much more scalable. This is a story about one of those trips.

Why build a serverless CMS?

The answer is simple — I had an itch 🙂

Joking aside, if you are looking for a CMS built with technologies like React, Node, and GraphQL, and is also designed for the serverless environment — well, you’re in luck!

As of now, there is only one option that’s available today — Webiny.


Webiny is a CMS for the serverless era. It’s open-source, licensed under MIT, and it also features a hosted version, where you can get your very own serverless environment in just a few mouse clicks.

Webiny has numerous cool features and is aimed primarily at developers. You can learn more about that on the aforementioned website. Now, let’s get back on track with the original title of this post.

Challenges along the way

The challenges we faced can be categorized into three different groups:

  1. CMS related
  2. Architecture related
  3. Business/finance related

We will discuss each category in more detail and then share some useful tips — especially about the business/financial side. Now, let’s dive deeper.

CMS challenges

A CMS is a rather complex set of features that need to work together. Here are some examples.

Page builder

You can’t just create a form with an input field for the title and a CKEditor for the page content and call it a CMS. That’s not nearly enough. You need a full-featured page builder — and we’ve built one!

But boy, the effort required to create one that actually works and is optimized is hard to describe.

Webiny Page Builder

The first main challenge was managing the data inside the page builder. The interface is complex — each page item has over 20 different properties that you can adjust. In a single page you can easily have over 100 elements, bringing the total amount of props that you need to manage, store, and retrieve up to thousands.

We used Redux to control all that, and it’s not just Redux as a data-layer that saves you loads of trouble, but the debugger that comes with it as well. This wouldn’t have been possible without it. You can move up or down the stack of your events, see how the whole data structure looks in real time, and many other things without which this would’ve been way harder.

Redux flow

Rendering performance

Our CMS has over 100 different moving parts — some hold state, some do major state updates every time you make a change.

Propagating that through the UI can sometimes cause the interface to lag and just not feel “fast” enough. In the beginning, everything was nice and smooth.

You develop one element, you test it, you move on to the next. Everything seems to be working until you finally begin building real pages with tons of elements. Slowly but surely, everything becomes slower and slower.

An obvious first thought is the React Developer Tools. It has the “Highlight Updates” option, which will help you identify the re-rendered elements.

React Developer Tools

Unfortunately, this will only help you with a very simple UI hierarchy. In our case, this barely scratched the surface.

So, we had to turn to the Chrome Dev Tools, the Performance Snapshots in particular. This tool is priceless when debugging rendering performance in your React app. The recorded snapshots let you see exactly how much time was spent to render each particular element in your app hierarchy (don’t forget to open the Timings section). You can, then, go directly to the specific component’s code and start optimizing things:

Chrome Dev Tools — Performance Snapshot

This is still a very manual process since you have to think about the output and why certain components get re-rendered, but you can quickly find different problems in your app and in some cases solving one problem can fix rendering performance in many places (shouldComponentUpdate and Redux’s connect will be your best friends to get the job done).

Very daunting at the beginning, this process will make you happy in the end.

CSS libraries

Here is a helpful tip which is not very obvious straight away:

If you are using a library for CSS, like Emotion, make sure you do not update props too often on components that control the CSS, as that will create and insert a new style DOM element on each update.

A typical example is a tool to resize elements: the size of the element must under no circumstances be an Emotion prop — it has to be a simple React element style value. Otherwise, you will bombard your DOM with new style elements while resizing is taking place (that’s just how Emotion works).

User interface

Building a UI is a story of challenges enough to fill a whole book, if not several ones. Luckily, when we started, Google had just launched Material Design, and there were a few react libraries popping up that had those components ready to be used. The one we used is called RMWC, and, together with the Design section on the material.io website that teaches you how to use their components in the right way, we managed to create a pretty decent UI for our CMS.


Themes and plugins

What is a CMS if you can’t build your own theme, or customize it with your own plugins and add-ons?

We wanted our theme system to be super simple. Since the page builder is actually the place where you build the content and generate the HTML, we managed to create a theme library where all you have is a small JSON config and everything else is done through (S)CSS. You can learn more about our theme setup on our documentation website.

layouts: [
  // defines a list of layouts
],
fonts: {
  // defines font faces
},
colors: {
  // defines a list of default colors
},
elements: {
  // defines element settings
},
typography: {
  // defines typography styles
}

As for the plugin system, it’s hard to find any good advice or best practices on how to make one. So, what we actually did was build the whole CMS as a set of plugins. Every button, menu item, and form element is actually a plugin.

This made us confident in the scalability and the possibilities of what you can do with plugins. At the same time, this made our code very modular and decoupled. Finally, this has a great effect on developer experience (DX), since there is no difference between how the code is written for the “core” of the system and how the code will be written by other developers building their own plugins.

Architectural challenges

Architecture for a serverless application is rather simple, right? You have an API Gateway, some Lambdas, a database on the other end, an S3 bucket with static files.

But, here’s a question for you — what about a multi-tenant serverless application? Basically, one where each user can have their own domain name, their own SSL certificate, static file hosting, and a database.

The hosted version of Webiny is exactly that.

And, I can tell you that, today, there isn’t a cloud provider that will allow you to have 10,000 different SSL certificates on an API gateway or serve static assets from the same bucket using multiple domain names. CloudFront and similar CDNs have hard limits on the number of SSL certificates and domain names that they can serve. We talked to several cloud providers, and they just don’t support that.

So, although serverless is great — it’s got its limitations. Or does it?! 😉

We took this problem as a challenge and, to get around it, we used an OpenResty Nginx fork and wrote our own API gateway for AWS Lambda, designed our own SSL certificate management system in front, created a proxy to the S3 buckets, and also added a bunch of usage and performance monitoring agents, et voilà — we can now support as many serverless tenants as we want.

OpenResty is great; it just takes a bit of practice with coding in Lua.


You might argue that this is not a true Serverless CMS since it uses a “server” in the form of a proxy to make it work.

Technically, the proxy is not required; it’s only needed for the hosted environment with multiple tenants. But, also, for the sake of argument, if you use a CDN in front of your serverless page, like Cloudflare, those are technically OpenResty reverse proxies.

Don’t get me wrong, this definitely is an area where big cloud providers like AWS, Google, and Azure need to improve since those platforms, today, don’t support a multi-tenant serverless architecture.

Business challenges

By this point, you’ve hopefully gotten the picture of the effort required to build a serverless CMS, but the story doesn’t really end there — I’ve got one more thing to share.

Webiny is open source, but it has taken a lot of time, dedication, and money to create. When we started the project, we knew doing this on the side was never an option; we would most likely fail in our attempt. The only way to build something as large and complex as this was with financial support. But no investor will give you money if you approach them with an elevator pitch to build a CMS that has 0 users and no revenue — it would just never be convincing enough to raise investment.

Also, creating a CMS 10 years ago was way easier because the bar to enter that market was much lower, but that’s no longer the case. Several people told us — “Hey, just build an MVP.”

Well, you go and try to build an MVP for, let’s say, an electric car, and let me know which things you will leave out that your competition already has today.

Let me help you with answering that: you won’t leave out a single thing! Your car will have everything the competition has and then some — and that’s when you’ll launch! Otherwise, nobody will buy it.

No matter what you do in life, if you want to be great at something, you need to reach the level of the “current greatness” and then be slightly better than that.

So, how did we solve our financial problem?

Well, the team originally started as a small web agency. We earned some money doing standard web-agency work and then, at one point, we decided to stop accepting new work. All the money left in the bank account was used as an investment to cover salaries for the team, so that we could be fully committed to this project. Had it been any other way, Webiny would never have seen the light of day.

One small tip in this area.

AWS has the Activate program; you can apply to it and get $1,000 in AWS credits, which is great for covering the cost of the infrastructure while developing.

Another thing to explore is ProductHunt Ship. If you subscribe for the annual membership ($600 to $1,500), you can get between $5,000 and $7,500, also in AWS credits, alongside other benefits. Note that if you apply for AWS Activate, the initial credit amount might be deducted from the credits you get from ProductHunt Ship.


What’s next

If you got this far … I admire your focus. I blame the length of this post on my long plane flight.

Anyhow, I hope you’ve enjoyed the read. I would really appreciate it if you’d give Webiny a spin and let me know what you think of it. You can reach out to me via Twitter at @SvenAlHamad.


Building Webiny — A Serverless CMS was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Constructing a Sigmoid Perceptron in Python

In this article, our objective is to visualize the training of a sigmoid neuron with the help of a sample dataset.

Intuition of Model

Activation Function

Let’s first understand the basics of the sigmoid model before we construct it. As the name suggests, the model revolves around the sigmoid formula, which can be represented as:

S(x) = 1 / (1 + e^(−(w·x + b)))

Our sigmoid formula comprises the following parameters:

  • X: the features of the dataset
  • W: the weight vector corresponding to X
  • B: the bias
Sigmoid Curve 2D
Sigmoid Curve 3D

The property of the sigmoid curve (its value always lies between 0 and 1) makes it well suited for basic regression and classification problems.
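As a minimal sketch of the formula above (the function and variable names here are my own, not taken from the article), the sigmoid of the weighted input can be written as:

```python
import numpy as np

def sigmoid(x, w, b):
    """Sigmoid of the weighted input: 1 / (1 + exp(-(w.x + b)))."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

# The output always lies strictly between 0 and 1;
# with a weighted input of exactly 0 it is 0.5.
print(sigmoid(np.array([1.0, 2.0]), np.array([0.5, -0.25]), 0.0))  # prints 0.5
```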

Loss Function

Since we will be dealing with real-valued outputs for this visualization, we will be using Mean Squared Error (MSE) as our loss function.
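The original loss-function figure is not shown here, so as a stand-in, a minimal MSE implementation looks like:

```python
import numpy as np

def mse_loss(y_true, y_pred):
    # Mean Squared Error: the average of the squared residuals.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean((y_true - y_pred) ** 2)

print(mse_loss([1.0, 0.0], [0.0, 0.0]))  # prints 0.5
```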


The gradient of the loss with respect to “w” can be calculated (for a single sample, up to a constant factor of 2) as:

∂L/∂w = (ŷ − y) · ŷ · (1 − ŷ) · x

and analogously for “b”, without the trailing x.
Let us start with importing the libraries we need. Here is some (in-depth) documentation for animation: Animation, Simple animation tutorial
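Assuming the article uses NumPy and Matplotlib (the animation links above point to matplotlib.animation), the imports would look roughly like:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line in a notebook
import matplotlib.pyplot as plt
import matplotlib.animation as animation
```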


Now let's list the components that our SigmoidNeuron class will comprise:

  • function to calculate w·x + b
  • function to apply the sigmoid function
  • function to predict the output for a provided X dataframe
  • function to return gradient values for “w” and “b”
  • function to fit the model to the provided dataset
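Putting those five components together, a sketch of the class might look like this (method and variable names, the starting values, and the learning rate are my assumptions, not necessarily the article's):

```python
import numpy as np

class SigmoidNeuron:
    def __init__(self, n_features):
        # A deliberately bad starting point makes the training visible.
        self.w = np.full(n_features, -2.0)
        self.b = -2.0

    def linear(self, x):
        # w·x + b
        return np.dot(self.w, x) + self.b

    def sigmoid(self, z):
        return 1.0 / (1.0 + np.exp(-z))

    def predict(self, X):
        # Predicted output for every row of the dataset X.
        return np.array([self.sigmoid(self.linear(x)) for x in X])

    def grad_w(self, x, y):
        y_pred = self.sigmoid(self.linear(x))
        return (y_pred - y) * y_pred * (1 - y_pred) * x

    def grad_b(self, x, y):
        y_pred = self.sigmoid(self.linear(x))
        return (y_pred - y) * y_pred * (1 - y_pred)

    def fit(self, x, y, lr=1.0):
        # One gradient step for a single sample; the caller loops over
        # the dataset so every intermediate state can be plotted.
        self.w = self.w - lr * self.grad_w(x, y)
        self.b = self.b - lr * self.grad_b(x, y)
```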


Some important points regarding the class:

  • Our objective is to visualize the training, so the change after each input must be recorded. Hence the “fit” function does not itself iterate through the dataset; the caller does.
  • We can initialize “w” and “b” to any random values. Here we have initialized them to a specific float because it provides the most unfit starting scenario for the model.
Unfit Scenario

Our next step will be to create a sample dataset, a function that produces contour plots, and a loop that feeds the dataset to the model over time.
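The dataset values below are illustrative stand-ins (the article's actual numbers aren't shown); the plotting helper evaluates a sigmoid model on a meshgrid and draws the result with contourf:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

# Illustrative sample dataset: two features, targets between 0 and 1.
X = np.array([[2.5, 2.5], [4.0, -1.0], [1.5, -4.0],
              [-3.0, 1.25], [-2.0, -4.0], [1.0, 5.0]])
Y = np.array([1.0, 1.0, 0.0, 0.0, 0.0, 1.0])

def sigmoid_output(w, b, pts):
    # Model output for an (n, 2) array of points.
    return 1.0 / (1.0 + np.exp(-(pts @ w + b)))

def plot_contour(w, b, ax):
    # Evaluate the model on a grid, draw filled contours, then
    # overlay the dataset colored by its target values.
    xx, yy = np.meshgrid(np.linspace(-6, 6, 100), np.linspace(-6, 6, 100))
    grid = np.c_[xx.ravel(), yy.ravel()]
    zz = sigmoid_output(w, b, grid).reshape(xx.shape)
    ax.contourf(xx, yy, zz, cmap="coolwarm", alpha=0.7)
    ax.scatter(X[:, 0], X[:, 1], c=Y, cmap="coolwarm", edgecolor="k")

fig, ax = plt.subplots()
plot_contour(np.array([-2.0, -2.0]), -2.0, ax)  # the "unfit" starting state
```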


Some resources: Mycmap, meshgrid, subplots, contourf.

Finally, we obtain 120 plots that depict the training.

The first plot

The last plot

All 120 plots can be found here.

Our last step is to create an animation for the training.
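A sketch of the animation step using matplotlib.animation.FuncAnimation (the snapshot list and grid here are fabricated for illustration; in the real run they would be the 120 recorded training states):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import matplotlib.animation as animation

# Pretend these (w, b) pairs were recorded after each training step.
snapshots = [(np.array([s, s]), s) for s in np.linspace(-2.0, 2.0, 12)]

xx, yy = np.meshgrid(np.linspace(-6, 6, 60), np.linspace(-6, 6, 60))
fig, ax = plt.subplots()

def update(i):
    # Redraw the contour plot for the i-th recorded state.
    ax.clear()
    w, b = snapshots[i]
    zz = 1.0 / (1.0 + np.exp(-(w[0] * xx + w[1] * yy + b)))
    ax.contourf(xx, yy, zz, cmap="coolwarm")
    ax.set_title(f"training step {i}")

anim = animation.FuncAnimation(fig, update, frames=len(snapshots), interval=200)
# anim.save("training.gif", writer="pillow")  # export requires Pillow (or ffmpeg for .mp4)
```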


Final result


3D plot visualization


Thank you for reading this article. The complete code can be found here.

Constructing a Sigmoid Perceptron in Python was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

I Underestimated Just How Many Subpoenas I Would Get

The number of subpoenas early crypto companies got from misinformed government agencies was “staggering,” says Bloq’s Steve Beauregard.

Elon Musk Ports Epic’s Unreal Engine to Install Fortnite in Your Tesla

By CCN: Worried Tesla CEO Elon Musk wasn’t packing his cars with enough features? The tech pioneer wants you to be able to play top video game titles in his vehicles. Tesla is definitely porting over Unity and Epic’s Unreal Engine, the engines that Fortnite and Rocket League run on. Musk also tweeted at Microsoft and Roblox to see if they want to be involved too. “Also porting Unreal Engine” — Elon Musk (@elonmusk), May 19, 2019.

Elon Musk Wants Fortnite, Rocket League, Minecraft and Roblox in Your Tesla

In a chain of tweets, the tech billionaire dropped the bombshell that alongside

The post Elon Musk Ports Epic’s Unreal Engine to Install Fortnite in Your Tesla appeared first on CCN

Bitcoin History Part 13: The First Mining Pool

The notion that anyone could solo mine bitcoin – on a CPU no less – seems positively quaint today. But in 2010, this method wasn’t just possible – it was the norm. With an exponentially lower hashrate, less competition and a 50 bitcoin block reward, there was enough pie for everyone to get a bite. But some miners didn’t just want a bite – they wanted a whole slice, and to achieve that, they decided to join forces and pool their hashpower. And thus “cooperative mining” was born.

Also read: Bitcoin History Part 12: When No One Wanted Your BTC

Bitcoin’s Pool Birth

If Bitcoin is a revolution, it is one which contains a series of micro-revolutions. In late 2010, Bitcoin was to experience its first industrial revolution when a few miners agreed to combine their hashing power. In that moment, history was made, and in the years to come, so was a lot of money. Not everyone was enamored with the idea, though, when it was first floated by Slush on November 27, 2010. “Once people started to use GPU enabled computers for mining, mining became very hard for other people,” he explained. “I’m on bitcoin for few weeks and didn’t find block yet (I’m mining on three CPUs). When many people have slow CPUs and they mining separately, each of them compete among themselves AND against rich GPU bastards ;-).” He continued:

I have an idea: Join poor CPU miners to one cluster and increase their chance to find a block!

Slush added: “Advantages? When you have poor standalone computer, you need to wait many weeks or even months for finding full 50BTC reward. When you join cluster like this, you will constantly receive small amount of bitcoins every day or week (depends on full cluster performance) … I think it is extremely important for bitcoin economy to diversify mining across whole network and not leave mining on few lucky guys with fast GPUs.”

Skeptics Were Skeptical But the Believers Believed

Reaction to Slush’s bold proposal was mixed. Some bought into the idea, while others were distinctly unimpressed. “Isn’t cooperative mining a form of communism?” responded one Bitcointalk user. “I think it’s useless and much harder to do than one might think.” “This is fundamentally flawed,” snapped another pooled mining opponent.

Slush remained undaunted, though, and within three weeks of his proposal, “cooperative mining,” as it was then known, began. Slush Pool was compatible with Jeff Garzik’s bitcoin CPU miner at launch as well as a couple of early GPU miners. “There is already ~600000khash/s [600 MH/s] of power and more will come tomorrow,” proclaimed Slush. He wasn’t wrong.

Today, Slush’s cooperative mining Bitcointalk thread has grown to 1,148 pages, and Slush Pool, which currently captures 9.3% of the BTC network hashrate, has also grown spectacularly. Since 2010, it’s mined over 1 million BTC and now boasts 8,500 miners and a hash rate of more than 5 EH/s – an increase of 8.4 billion X in nine years.

Bitcoin History is a multipart series from news.Bitcoin.com charting pivotal moments in the evolution of the world’s first and finest cryptocurrency. Read part 12 here.

Images courtesy of Shutterstock and Blockchain.com


The post Bitcoin History Part 13: The First Mining Pool appeared first on Bitcoin News.