The ability to stay focused is the key to achieving your goals in a world full of temptations. Nir Eyal, author of the bestseller Hooked, reveals his way of fighting distractions in his new book, Indistractable.

Why are we so easily distracted? Sometimes there are external causes, but in most cases distraction has internal sources: we get distracted because we want to escape discomfort. To avoid distraction, we need to solve the problem from the inside.

Identify internal triggers and beat them

The next time you feel drawn toward a distraction, record how you feel and what triggered it. That is how you identify internal triggers in the first place. Then you can defuse those triggers, for example by making the task at hand more fun.

Plan quality time for yourself before work

Having a plan prevents distraction because you know exactly what you are striving for. However, scheduling work is not the best place to start. Instead, plan time for yourself and your relationships first; then you will not escape to your hobbies in the middle of work.

Cut down on office distractions and learn to organize

Office distractions such as email notifications are typical external triggers. Let others know when you need to be fully focused on the task at hand so that they do not interrupt you. Also, learn to sort your email more effectively so that only a few messages demand your attention each day. Beyond email, there are other forms of workplace distraction; learn to organize them in the least distracting way.

Make use of pacts to prevent distraction

Be aware that the battle between you and distraction is not a one-day fight. You can try an app that blocks access to distracting sites, or find a study buddy to focus together with. Imposing a fine on yourself for missing a target is also practical, as the author can attest from experience.

Contribute to a functional work culture

A dysfunctional work culture is the beginning of endless distraction: employees are overburdened and even expected to answer email after work. Employers should create a channel that lets employees give feedback safely without worrying about getting fired. Step by step, the company can move toward a functional work culture.

Every aspiring entrepreneur should be aware of the deadly pitfall of building something that nobody wants. That is why the right kind of analytics is so necessary. The book Lean Analytics introduces good metrics that start-up founders can use to navigate the unknown and assess their success.

Data-driven in the right direction

Data is vital to business. Entrepreneurs need data to convince others that their ideas will work. Entrepreneurs tend to overestimate their success, but data does not lie; it helps founders stay grounded in reality. However, personal judgment about which data to pursue is also important. Don’t be a mere slave to the numbers.

What are good metrics?

To stay data-informed, you need to find metrics that provide meaningful data. Good metrics have three characteristics:

  • Comparable: a good metric can be compared across different time periods, groups of customers, and so on
  • Understandable: a good metric is simple and easy to comprehend
  • A ratio: ratios are inherently comparable and easier to act on than raw numbers

Five distinct stages of the Lean Analytics framework

The Lean Analytics framework suggests a start-up will go through five stages:

  • Empathy — identify a need that people have / identify your niche market
  • Stickiness — figure out how to satisfy the need with a product
  • Virality — add features that attract people
  • Revenue — the business starts to grow and generate revenue
  • Scale — expand or break into new markets

Focus on one metric

To achieve success, founders must focus on the one metric that matters most. Knowing which metric is most important keeps you from getting lost in a sea of data.

What’s the best metric?

There is no single best metric in general; it differs by industry. For e-commerce companies, the most important metric is revenue per customer, while for media sites it is the click-through rate.

By the time you know where you should go, it is too late to go there; and if you always keep to your original path, you will miss the road to the future.

Charles Handy illustrates this with the road to Davy’s Bar: the directions were to turn right and go up the hill half a mile before reaching Davy’s Bar, but by the time he realized he had missed the turn, he had already arrived at Davy’s Bar.

A growth curve usually takes an “S” shape, which is why we call it an S-curve or sigmoid curve. To keep the overall growth rate high, you have to start developing your second S-curve while you still have the time and resources to invest in it.
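For reference, the “S” shape is the logistic (sigmoid) function; one minimal form, where the ceiling L, growth rate k, and midpoint t0 are illustrative symbols rather than anything from the book, is:

```latex
f(t) = \frac{L}{1 + e^{-k(t - t_0)}}
```

Growth is slow at first, accelerates around the midpoint, and flattens as it approaches the ceiling, which is exactly why the second curve has to start before the first one levels off.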

Intel’s CPUs, Netflix’s video streaming, Nintendo’s gaming consoles, and Microsoft’s cloud are all excellent examples of businesses driven by a second curve.

Finding and catching the second curve takes both vision and execution. You have to take in more information and continuously sift through it to identify the best opportunities. Then, once an opportunity is identified, you need a reliable team to fight the battle and find out whether it really works.

What made you succeed may not make you succeed again; there is always a limit to growth. The second curve theory helps us reflect on why and how to embrace change and live a more thriving life.

Requirements

Internet-scale web services deal with high-volume traffic from all over the world. However, one server can only handle a limited number of requests at a time. Consequently, there is usually a server farm, or a large cluster of servers, to absorb the traffic together. Here comes the question: how do we route requests so that each host receives and processes them evenly?

Since there are many hops and layers of load balancers from the user to the server, this time, more specifically, our design requirements are

Note: if Service A depends on (or consumes) Service B, then A is a downstream service of B, and B is an upstream service of A.

Challenges

Why is it hard to balance loads? The answer is that it is hard to collect accurate load distribution statistics and then act accordingly.

Distributing-by-requests ≠ distributing-by-load

Random and round-robin distribute traffic by request count. However, the actual load is not uniform per request: some requests are heavy in CPU or thread utilization, while others are lightweight.
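A minimal round-robin sketch in Go (the backend addresses are made up for illustration) shows the limitation: every backend receives the same number of requests, even if one request costs far more than another.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// roundRobin hands out backends in a fixed cycle: every backend gets the
// same number of requests, regardless of how expensive each request is.
type roundRobin struct {
	backends []string
	next     uint64
}

func (rr *roundRobin) pick() string {
	// Atomic counter keeps the rotation even under concurrent callers.
	n := atomic.AddUint64(&rr.next, 1)
	return rr.backends[(n-1)%uint64(len(rr.backends))]
}

func main() {
	rr := &roundRobin{backends: []string{"10.0.0.1", "10.0.0.2", "10.0.0.3"}}
	for i := 0; i < 6; i++ {
		fmt.Println(rr.pick()) // cycles through the three backends by request count only
	}
}
```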

To reflect load more accurately, load balancers have to maintain local state for each backend server: the observed number of active requests, the number of connections, or request-processing latencies. Based on that state, we can use distribution algorithms such as least-connections, least-time, and random N choices:

Least-connections: a request is passed to the server with the fewest active connections.

Latency-based (least-time): a request is passed to the server with the lowest average response time and fewest active connections, taking server weights into account.

However, these two algorithms only work well with a single load balancer. If there are multiple load balancers, a herd effect can arise: all of the load balancers notice that one server is momentarily faster, and then they all send requests to that server.

Random N choices (where N=2 in most cases, a.k.a. power of two choices): pick two servers at random and choose the better of the two, thereby avoiding the worst choice.
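Here is a minimal sketch of the power-of-two-choices idea in Go, using a locally tracked active-connection count as the load signal (the struct and field names are illustrative, not taken from any particular load balancer):

```go
package main

import (
	"fmt"
	"math/rand"
)

// backend carries a locally observed load signal: the number of
// connections this load balancer currently has open to it.
type backend struct {
	addr        string
	activeConns int
}

// pickPowerOfTwo selects two distinct backends at random and returns the
// one with fewer active connections, so the worst choice is always avoided.
// Assumes at least two backends.
func pickPowerOfTwo(backends []*backend) *backend {
	i := rand.Intn(len(backends))
	j := rand.Intn(len(backends) - 1)
	if j >= i {
		j++ // shift so the second index is distinct from the first
	}
	if backends[i].activeConns <= backends[j].activeConns {
		return backends[i]
	}
	return backends[j]
}

func main() {
	pool := []*backend{
		{addr: "10.0.0.1", activeConns: 3},
		{addr: "10.0.0.2", activeConns: 12},
		{addr: "10.0.0.3", activeConns: 5},
	}
	chosen := pickPowerOfTwo(pool)
	chosen.activeConns++ // the chosen backend now holds one more in-flight request
	fmt.Println("routing request to", chosen.addr)
}
```

Because each pick only compares two random candidates, the herd effect described above is much weaker: even if one server looks momentarily best, most picks never consider it.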

Distributed environments

A local load balancer is unaware of global upstream and downstream states, including

  • upstream service loads
  • the upstream service may be very large, so it is hard to pick the right subset for the load balancer to cover
  • downstream service loads
  • the processing times of different requests, which are hard to predict

Solutions

There are three options for collecting load statistics accurately and then acting accordingly:

  • centralized & dynamic controller
  • distributed but with shared states
  • piggybacking server-side information in response messages or active probing

The Dropbox Bandaid team chose the third option because it fits well into their existing random N choices approach.

However, instead of using local state, as the original random N choices approach does, they use real-time global information from the backend servers, carried in the response headers.

Server utilization: each backend server is configured with a maximum capacity and counts its ongoing requests, from which it calculates a utilization percentage ranging from 0.0 to 1.0.
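A rough sketch of the backend side of this piggybacking idea in Go (the header name X-Backend-Utilization and the capacity value are assumptions for illustration; the source does not specify them):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"strconv"
	"sync/atomic"
)

// maxCapacity is this backend's configured request capacity; the value and
// the header name below are illustrative assumptions, not Bandaid's actual ones.
const maxCapacity = 100

var ongoing int64 // requests currently being processed by this server

func handler(w http.ResponseWriter, r *http.Request) {
	atomic.AddInt64(&ongoing, 1)
	defer atomic.AddInt64(&ongoing, -1)

	// Piggyback the current utilization (0.0 to 1.0) on every response so the
	// load balancer gets fresh, server-reported load information for free.
	util := float64(atomic.LoadInt64(&ongoing)) / maxCapacity
	w.Header().Set("X-Backend-Utilization", strconv.FormatFloat(util, 'f', 3, 64))
	fmt.Fprintln(w, "ok")
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The load balancer can then read this header from each response and keep a fresh, server-reported utilization score per backend to feed into its random N choices pick.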

There are two problems to consider:

  1. Handling HTTP errors: if a server fails requests quickly, it looks lightly loaded, attracts even more traffic, and fails even more.
  2. Stats decay: if a server’s reported load is too high, no requests will be distributed to it, its stats never get refreshed, and the server gets stuck. They use a decay function based on an inverted sigmoid curve to solve the problem (a sketch follows this list).
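A minimal sketch of such a decay function in Go, assuming the stored utilization decays toward zero over a fixed time window (the window length and steepness constant are made-up parameters, not Dropbox’s actual values):

```go
package main

import (
	"fmt"
	"math"
	"time"
)

// decayedUtilization shrinks a reported utilization value as it ages.
// The inverted sigmoid keeps the value close to what was reported while the
// sample is fresh, then drops it quickly toward zero as the sample nears the
// end of the window, so a stuck high reading eventually admits traffic again.
func decayedUtilization(reported float64, age, window time.Duration) float64 {
	x := age.Seconds() / window.Seconds() // 0.0 when fresh, 1.0 when fully stale
	const steepness = 10.0                // made-up constant controlling the drop-off
	factor := 1.0 / (1.0 + math.Exp(steepness*(x-0.5))) // inverted sigmoid: ~1 down to ~0
	return reported * factor
}

func main() {
	window := 20 * time.Second
	for _, age := range []time.Duration{0, 5 * time.Second, 10 * time.Second, 15 * time.Second, 20 * time.Second} {
		fmt.Printf("age=%v decayed=%.3f\n", age, decayedUtilization(0.9, age, window))
	}
}
```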

Results: requests are more balanced

Concurrency Models

  • Single-threaded - Callbacks, Promises, Observables and async/await: vanilla JS
  • Threading/multiprocessing, lock-based concurrency
    • protecting critical sections vs. performance
  • Communicating Sequential Processes (CSP)
    • Golang or Clojure’s core.async (see the sketch after this list)
    • processes/threads pass data through channels
  • Actor Model (AM): Elixir, Erlang, Scala
    • asynchronous by nature, with location transparency that spans runtimes and machines - if you have a reference (Akka) or PID (Erlang) of an actor, you can message it via its mailbox
    • powerful fault tolerance by organizing actors into a supervision hierarchy, so failures are handled at exactly the right level of the hierarchy
  • Software Transactional Memory (STM): Clojure, Haskell
    • like MVCC or pure functions: commit / abort / retry
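To make the CSP item above concrete, here is a minimal Go sketch in which goroutines communicate only by passing values over channels (the doubling “work” is just a placeholder):

```go
package main

import (
	"fmt"
	"sync"
)

// worker receives jobs from one channel and sends results on another;
// no memory is shared between goroutines, so no locks are needed.
func worker(jobs <-chan int, results chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	for j := range jobs {
		results <- j * 2 // placeholder "work": double the input
	}
}

func main() {
	jobs := make(chan int)
	results := make(chan int)

	var wg sync.WaitGroup
	for i := 0; i < 3; i++ { // three concurrent workers
		wg.Add(1)
		go worker(jobs, results, &wg)
	}

	// Producer: send the work, then close the channel so the workers' loops end.
	go func() {
		for j := 1; j <= 5; j++ {
			jobs <- j
		}
		close(jobs)
	}()

	// Close results once every worker has finished, ending the range below.
	go func() {
		wg.Wait()
		close(results)
	}()

	for r := range results {
		fmt.Println(r)
	}
}
```

Closing the jobs channel is the CSP way of signaling “no more work”; the workers simply fall out of their range loops.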
