
3 Dimensions to Improve Developer Experience

· 4 min read

A study by GetDX, Microsoft Research, and the University of Victoria in Canada identified 25 factors that affect the software development experience and found that software engineers' productivity is mainly influenced by three dimensions: feedback loops, cognitive load, and flow state.

| | Feedback Loops | Cognitive Load | Flow State |
| --- | --- | --- | --- |
| People | Satisfaction with automated test speed and results | Perception of codebase complexity | Subjective perception of staying focused and avoiding distractions |
| | Satisfaction with time it takes to validate a local change | Ease of debugging production systems | Satisfaction with task or project goal clarity |
| | Satisfaction with time it takes to deploy a change to production | Ease of understanding documentation | Perception of interruptions during on-call |
| Process | Time required to generate CI results | Time required to get answers to technical questions | Number of time blocks without meetings or interruptions |
| | Code review turnaround time | Manual steps required for deploying changes | Frequency of unplanned tasks or requests |
| | Deployment lead time (time required to release changes to production) | Frequency of documentation improvements | Frequency of incidents requiring team attention |

Goals

  • Perception of ease in delivering software
  • Employee engagement or satisfaction
  • Perception of productivity

1. Feedback Loops

Feedback loops play a vital role in software development by optimizing the value stream and reducing delays in software delivery. The faster developers receive feedback, the quicker they can make necessary adjustments and course corrections. Research indicates that frequent deployment and shorter lead times can double the likelihood of meeting performance goals.

To improve DevEx, organizations must focus on shortening feedback loops. Slow feedback not only interrupts the development process but also leads to frustration and delays. Identifying areas where tools can be optimized or human processes improved is essential for enhancing the feedback loop process.

2. Cognitive Load

Cognitive load refers to the mental processing required by a developer to perform a task. As the number of tools and technologies grows, developers face an increasing cognitive load, which can sometimes hamper their ability to deliver value.

High cognitive load can arise due to issues such as poorly documented code or complex development processes. To improve DevEx, organizations should eliminate unnecessary hurdles in the development process. This includes emphasizing organized code and documentation, as well as providing easy-to-use, self-service tools that facilitate a smoother workflow.

3. Flow State

Flow state is a mental state characterized by full immersion, energized focus, and enjoyment in an activity. Developers often describe this state as "getting into the flow" or "being in the zone." Achieving a flow state leads to higher productivity, innovation, and employee development.

Studies have shown that developers who enjoy their work and frequently experience the flow state perform better and produce higher-quality products. However, delays and interruptions can hinder developers from reaching this productive state.

To enhance DevEx, organizations should focus on creating optimal conditions for the flow state. This includes minimizing disruptions by clustering meetings, avoiding unplanned work, and batching help requests. Additionally, fostering a positive team culture that gives developers autonomy and opportunities to work on fulfilling challenges is crucial for facilitating flow state. Leaders should promote environments conducive to these conditions.

Conclusion

By focusing on the three core dimensions of DevEx - feedback loops, cognitive load, and flow state - organizations can better understand and improve developer productivity. Optimizing these areas yields significant improvements in output and, ultimately, more successful software delivery.

Quick Intro to Optimism Architecture

· 4 min read

What is Optimism?

Optimism is an EVM-equivalent, optimistic rollup protocol designed to scale Ethereum.

  • Scaling Ethereum means increasing the number of useful transactions the Ethereum network can process.
  • Optimistic rollup is a layer 2 scalability technique which increases the computation & storage capacity of Ethereum without sacrificing security or decentralization.
  • EVM Equivalence is complete compliance with the state transition function described in the Ethereum yellow paper, the formal definition of the protocol.

Optimistic rollup works by bundling multiple transactions into a single transaction, which is then verified by a smart contract on the Ethereum network. This process is called "rolling up" because the individual transactions are combined into a larger transaction that is submitted to the Ethereum network. The term "optimistic" refers to the fact that the system assumes that transactions are valid unless proven otherwise, which allows for faster and more efficient processing of transactions.

Overall Architecture

Optimism Architecture

op-node + op-geth

The rollup node can run either in validator or sequencer mode:

  1. validator (aka verifier): Similar to running an Ethereum node, it simulates L2 transactions locally, without rate limiting. It also lets the validator verify the work of the sequencer by re-deriving output roots and comparing them against those submitted by the sequencer. In case of a mismatch, the validator can perform a fault proof.
  2. sequencer: The sequencer is a privileged actor that receives L2 transactions from L2 users, creates L2 blocks from them, and submits them to a data availability provider (via the batcher). It also submits output roots to L1. For now there is only one sequencer in the entire stack, which is why people criticize the OP Stack as not being decentralized.

op-batcher

The batch submitter, also referred to as the batcher, is the entity submitting the L2 sequencer data to L1, to make it available for verifiers.

op-proposer

The proposer generates and submits L2 output checkpoints to the L2 output oracle contract on Ethereum. After the finalization period has passed, this data enables withdrawals.

Both batcher and proposer submit states to L1. Why are they separated?

The batcher collects transaction data and submits it to L1 in batches, while the proposer submits commitments (output roots) to the L2 state, which finalize the view of L2 account states. They are decoupled so that they can work in parallel for efficiency.

contracts-bedrock

Various contracts for L2 to interact with the L1:

  • OptimismPortal: A feed of L2 transactions which originated as smart contract calls in the L1 state.
  • Batch inbox: An L1 address to which the Batch Submitter submits transaction batches.
  • L2 output oracle: A smart contract that stores L2 output roots for use with withdrawals and fault proofs.

Optimism components

How to deposit?

How to withdraw?

Feedback to Optimism's Documentation

Understanding the OP Stack can be challenging for several reasons. One is that many components are referred to by slightly different names in code and documentation. For example, "op-batcher" vs. "batch-submitter" and "verifiers" vs. "validators" are used interchangeably, leading to confusion and difficulty in understanding the exact function of each component.

Another challenge in understanding the OP stack is the evolving architecture, which may result in some design elements becoming deprecated over time. Unfortunately, the documentation may not always be updated to reflect these changes. This can lead to further confusion and difficulty in understanding the system, as users may be working with outdated or inaccurate information.

To overcome these challenges, it is important to carefully review all available documentation, keep terminology consistent across places, and stay up-to-date with any changes or updates to the OP Stack. This may require additional research and collaboration with other users or developers, but it is essential in order to fully understand and effectively utilize this complex system.

Web3 payment protocols

· 3 min read

Streaming or Recurring Payments

Token streaming means sending recurring payments in real time, like water flowing into its target. There are two kinds of payment innovations:

  • Payout, or payer-side innovation: Business owners use it for payroll, subscription, token vesting, corporate treasury, etc. The customer is mostly on the sender side, optimizing how to send out salaries or token equity to employees securely, cost-effectively, and automatically, like "Workday + Carta + Brex for crypto".
  • Accept payment, or payee-side innovation: Merchants use it to accept payments and allow customers to checkout, like "Stripe for crypto".

| Projects | Blockchains | Payout | Accept Payments | Differentiation |
| --- | --- | --- | --- | --- |
| Sablier | EVM | | | protocol for real-time finance, protocol + app |
| Superfluid | EVM | | | stream money every second, protocol + app |
| Roke.to | NEAR | | | stream money, protocol + app |
| Zebec | Solana | | | multisig treasury management and streaming payments |
| Streamflow | Solana | | | token distribution platform, token vesting and payroll |
| MeanFi | Solana | | | Manage Your Treasury With Real-Time Finance |
| calamus.finance | Multi-chain | | | real-time payment and token vesting |
| llamapay | EVM | | | automate transactions and stream them by the second; salary, vesting, payments |
| Suberra | EVM | | | accept crypto for commerce, one-time payments or recurring subscriptions |
| LoopCrypto | Ethereum, Polygon | | | payment links, receipts and reminders, dashboard, webhooks |
| diagonal.finance | EVM | | | non-custodial; multiple models: fixed, seat, usage-based, or Superfluid streaming |
| radom.network | NEAR, Aurora | | | pay web2 services with crypto |
| spritz.finance | EVM | | | pay bills with crypto |
| cask.fi | EVM | | | non-custodial protocol for auto payment |
| DataMynt | Multi-chain | | | for business: deposit, settlement, payment, invoice |
| Orbital | Multi-chain | | | web2 + web3 corporate financial services |
| Coinbase commerce | Multi-chain | | | merchants accept payments with custodial and non-custodial wallets and allow customers to checkout |
| wink.finance | Multi-chain | | | simplifies payments and expense management, multisig |
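
To make the streaming idea above concrete, here is a minimal sketch of how a payee's withdrawable balance accrues per second in a Sablier/Superfluid-style stream. This is my own illustration with made-up names, not any specific protocol's contract logic.

```python
from dataclasses import dataclass

@dataclass
class Stream:
    """Hypothetical per-second payment stream (illustrative only)."""
    total_amount: float  # tokens locked for the whole stream
    start_time: int      # unix seconds
    stop_time: int       # unix seconds
    withdrawn: float = 0.0

    def withdrawable(self, now: int) -> float:
        """Tokens the payee can withdraw at time `now`."""
        if now <= self.start_time:
            return 0.0
        elapsed = min(now, self.stop_time) - self.start_time
        duration = self.stop_time - self.start_time
        accrued = self.total_amount * elapsed / duration
        return accrued - self.withdrawn

# Example: a 30-day salary stream of 3000 tokens accrues ~100 tokens per day.
stream = Stream(total_amount=3000, start_time=0, stop_time=30 * 24 * 3600)
print(stream.withdrawable(now=15 * 24 * 3600))  # 1500.0 after 15 days
```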

Account Abstraction

As of the end of 2022, the most prominent web3 payment protocol is probably EIP-86/EIP-4337 for Account Abstraction. It uses smart-contract wallets to decouple private key ownership from asset account ownership. The protocol is still a work in progress on Ethereum, but Visa has implemented auto payments for self-custodial wallets on Starkware in its internal hackathon.

2023 software demand contraction

· One min read

There will be three major sources of the slowdown:

  1. fewer sign-ups: new businesses are going to dry up
  2. more churn: logo churn is going to be higher
  3. lower ARPU: seat contraction; the tailwind of enterprise growth in the industry is gone

Enterprise Authorization Services 2022

· 6 min read

Authorization determines whether an individual or system can access a particular resource. This process is a typical scenario that can be automated with software. We will review Google's Zanzibar, Zanzibar-inspired solutions, and other AuthZ services on the market.

Zanzibar: Google's Consistent, Global Authorization System

  • Google's = battle-tested with Google products, 20 million permission checks per second, p95 < 10ms, 99.999% availability
  • Consistent = ensure that authorization checks are based on ACL data no older than a client-specified change
  • Global = geographically distributed data centers, with load spread across thousands of servers around the world
  • Authorization = general-purpose authorization

In Zanzibar's context, we can express the AuthZ question in this way:

isAuthorized(user, relation, object) = does the user have relation to object?

It's called relationship-based access control (==ReBAC==). Clients could build ABAC and RBAC on top of ReBAC. Unfortunately, Zanzibar is neither open-sourced nor purchasable as an out-of-the-box service.
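
To make the ReBAC idea concrete, here is a minimal sketch in Python (my own illustration, not Zanzibar's actual implementation): relation tuples of the form (object, relation, subject) are stored in a set, and isAuthorized answers the question by checking direct tuples and recursively expanding group usersets. All identifiers are made up.

```python
# Relation tuples of the form (object, relation, subject).
# A subject is either a user id or a (group, relation) "userset".
TUPLES = {
    ("doc:readme", "viewer", "user:alice"),
    ("doc:readme", "viewer", ("group:eng", "member")),  # any member of group:eng can view
    ("group:eng", "member", "user:bob"),
}

def is_authorized(user, relation, obj):
    """Does `user` have `relation` to `obj`?"""
    if (obj, relation, user) in TUPLES:
        return True  # direct relation tuple
    # Expand usersets: (obj, relation, (group, group_relation)) means anyone
    # with group_relation on group also has relation on obj.
    for o, r, subject in TUPLES:
        if o == obj and r == relation and isinstance(subject, tuple):
            group, group_relation = subject
            if is_authorized(user, group_relation, group):
                return True
    return False

print(is_authorized("user:alice", "viewer", "doc:readme"))  # True (direct tuple)
print(is_authorized("user:bob", "viewer", "doc:readme"))    # True (via group:eng#member)
print(is_authorized("user:eve", "viewer", "doc:readme"))    # False
```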

Zanzibar Architecture

Zanzibar Architecture

Why is Zanzibar scalable?

  • Use Spanner as the database
  • Leopard indexing system
    • flatten group-to-group paths like a reachability problem in a graph
    • store index tuples as ordered lists of integers in a structure, such as a skip list, to achieve efficient union and intersections among sets.
    • async dataflow client > aclserver > changelog > Leopard indexing system
  • How to maintain external consistency? Zookie protocol - Clients check permissions with a timestamp-based token.

Auth0 Fine-Grained Authorization (FGA)

Auth0 FGA is an open-source implementation of Google Zanzibar. Check the interactive tutorial at https://zanzibar.academy/.

For enterprise developers in the context of microservices, how to use the managed solution of FGA?

How to use FGA?

  1. Go to the FGA dashboard to define the authorization model in DSL and relation tuples, and finally, add authorization assertions like automated tests (this is great!).
  2. Developers go back to their services and call the FGA wrapper's check endpoint, as sketched below.
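
As a rough sketch of step 2, a microservice could call its FGA wrapper's check endpoint over HTTP like this. The URL, payload shape, and response field are assumptions for illustration only, not the exact Auth0 FGA API.

```python
import requests

# Hypothetical internal wrapper around the FGA check API.
FGA_CHECK_URL = "https://fga-wrapper.internal.example.com/check"

def can_user(user: str, relation: str, obj: str) -> bool:
    """Ask the FGA wrapper whether `user` has `relation` to `obj`."""
    resp = requests.post(
        FGA_CHECK_URL,
        json={"user": user, "relation": relation, "object": obj},
        timeout=2,
    )
    resp.raise_for_status()
    return resp.json().get("allowed", False)

# Guard a document read inside a microservice.
if can_user("user:anne", "reader", "document:2021-roadmap"):
    print("serve the document")
else:
    print("403 Forbidden")
```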

Unfortunately, I don't see changelog audits or version control to roll back changes in case developers break things in the FGA dashboard, probably because FGA is still a work in progress.

OSO

With Oso, you can:

  • Model: Set up common permissions patterns like role-based access control (RBAC) and relationships using Oso's built-in primitives. Extend them however you need with Oso's declarative policy language, Polar (DSL).
  • Filter: Go beyond yes/no authorization questions. Implement authorization over collections too - e.g., "Show me only the records that Juno can see."
  • Test: Write unit tests over your authorization logic now that you have a single interface for it. Use the Oso debugger or REPL to track down unexpected behavior.

Ory Keto

Keto is an open-source (Go) implementation of Zanzibar. It ships gRPC and REST APIs, NewSQL support, and an easy and granular permission language (DSL). It supports ACL, RBAC, and other access models.

Authzed SpiceDB

SpiceDB is an open-source database system for managing security-critical application permissions inspired by Google's Zanzibar paper.

Aserto Topaz

Topaz is an open-source authorization service providing fine-grained, real-time, policy-based access control for applications and APIs.

It uses the Open Policy Agent (OPA) as its decision engine, and provides a built-in directory that is inspired by the Google Zanzibar data model.

Authorization policies can leverage user attributes, group membership, application resources, and relationships between them. All data used for authorization is modeled and stored locally in an embedded database, so authorization decisions can be evaluated quickly and efficiently.

Cloudentity

It seems to be an integrated CIAM solution, and there is no standalone feature for enterprise authorization. Documentation is confusing...

Open Policy Agent

The Open Policy Agent (OPA) is an open-source, general-purpose policy engine that unifies policy enforcement across the stack. OPA provides a high-level declarative language that lets you specify policy as code and simple APIs to offload policy decision-making from your software. You can use OPA to enforce policies in microservices, Kubernetes, CI/CD pipelines, API gateways, and more.
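
For example, a service can offload a decision to a running OPA instance through OPA's REST Data API. The sketch below assumes OPA runs locally and that a policy has been loaded under a hypothetical package `httpapi.authz` with an `allow` rule; those names and the input fields are assumptions.

```python
import requests

# Data API: POST /v1/data/<package path>/<rule> with a JSON "input" document.
OPA_URL = "http://localhost:8181/v1/data/httpapi/authz/allow"

def opa_allows(user: str, method: str, path: str) -> bool:
    resp = requests.post(
        OPA_URL,
        json={"input": {"user": user, "method": method, "path": path}},
        timeout=2,
    )
    resp.raise_for_status()
    # OPA returns {"result": true/false}; a missing result means the rule is undefined.
    return resp.json().get("result", False)

print(opa_allows("alice", "GET", "/finance/salary/alice"))
```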

OPA was originally created by Styra and is a graduated project of the Cloud Native Computing Foundation (CNCF).

Permit.IO

Permit.IO is a low-code AuthZ platform based on OPA and OPAL.

Scaled Access

Scaled Access is a European company that was acquired by onewelcome. It offers rich context-aware access control, real-time policy enforcement, fine-grained authorization, and relationship-based access control. There are APIs in the documentation but no SDKs.

Casbin

Casbin is an authorization library that supports access control models like ACL, RBAC, and ABAC in Golang. There are SDKs in many programming languages. However, its policy configuration is fairly static in CSV files, and it is better suited to internal corporate use than to customer-facing authorization.
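
A minimal usage sketch with the Python port (pycasbin), assuming a basic model file and the CSV policy file mentioned above; the file names and policy contents are placeholders.

```python
import casbin  # pip install casbin

# model.conf defines the request/policy/matcher format; policy.csv holds rules
# such as "p, alice, data1, read" (both files are placeholders here).
enforcer = casbin.Enforcer("model.conf", "policy.csv")

sub, obj, act = "alice", "data1", "read"
if enforcer.enforce(sub, obj, act):
    print("permit alice to read data1")
else:
    print("deny")
```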

SGNL

This service looks pretty scrappy - beautiful websites without any content for developers. No doc, no video or self-service demo. I suspect its positioning is for non-tech enterprises. Not recommended.

Summary

Here is a preliminary ranking after my initial check. Ideally, I want a LaunchDarkly-like AuthZ platform - easy to integrate and operate, fully equipped with audit logs, version control, and a developer-facing web portal.


| | GitHub Stars | Models | DevEx | Perf | Score (out of 5) |
| --- | --- | --- | --- | --- | --- |
| Oso | 2.8k | ReBAC | DSL, API, SDK, web portal | ? | 3 |
| SpiceDB | 3k | ReBAC | DSL, API, SDK, web portal | ? | 3 |
| permit.io | 840 | ReBAC | DSL, API, SDK, low-code web portal | ? | 3 |
| Aserto Topaz | 534 | ReBAC | DSL, API, SDK, web portal | ? | 3 |
| FGA | 657 | ReBAC | DSL, API, SDK, web portal | ? | 3 |
| Keto | 3.8k | ReBAC | DSL, API, SDK | ? | 2 |
| Casbin | 13.4k | ABAC, RBAC | Library, static file for policies | ? | 1 |

Picking startup advisors

· One min read
  • Pick a broad set of advisors
    • 3 ~ 5 years ahead of the startup's current stage, with experience fresh enough that they can recall the details and provide immediate help
    • Late-stage advisors: better at strategic thinking; not good for very specific details.
  • Compensation: advisor option grant of 0.25% ~ 0.75%, vesting monthly over two years

How to build developer community?

· 3 min read

What doesn't work?

  • Throwing money at the problem
    • Sponsoring a hackathon without proper docs that help devs get started.
    • Not seeding your community with ideas & examples
  • Building excellent tools but telling no one about them or how to use them.
    • Assuming you have provided ==enough context== in your content
  • Spending too much time on low-leverage work like answering individual questions

What works?

  • Set clear goals: foster apps, integrations, mutual help, and word of mouth around your tech.
    • a system of new useful apps built on top of your tech
    • integrations between your tech and existing products
    • helping other devs in your community
    • telling their friends about your tech
  • Provide useful content: Libraries, APIs, docs, tools, smart contracts, education, etc.
  • Establish long-term relationships: tutorials, videos, podcasts, workshops, meetups.
  • Ride the hype: explain how your tech works with the popular tech and dev tools.
  • Create superstars in your community.

How to improve docs?

  • Improving docs is high-leverage work.

    • ==Every minute spent making your docs better is worth an hour of answering individual questions - stack overflow.==
  • Understand your developer personas.

    • Listen to your customers every day.
    • Categorize your customers.
  • Developer's questions are useful clues to improve your doc.

How to categorize your developers?

Segment by skill level: the table below maps each skill level to what those developers need.

| Skill level | Docs | Tooling | Tutorials / Examples |
| --- | --- | --- | --- |
| Beginner | Quickstarts | SDK, simple libs | Frontend code, zero-to-hero video series |
| Intermediate | Reference docs, topic-sorted guides | Type annotations, dev console/studio | Full-fledged example apps |
| Advanced | A library of ideas we'd like to see built | Mini accelerator / grants | Primitives as building blocks for larger apps |

Segment by role / intent

| Role | Value |
| --- | --- |
| Hackathon / indie dev | medium |
| Dev from an integration partner | medium |
| Future founder | high |

How to support superstars in your community?

  • Make them feel special and highlighted
  • Promote what they are doing
  • Create sharing opportunities for them to reach a broader community

How to run a better hackathon?

  • Know your customer - are they willing to build on your tech?
  • Know your co-sponsor - are they as high quality as you?
  • Know how to get people started - they are unprepared, have limited time, and are overwhelmed by your tech and others'

How to run a better bounty?

  • It doesn't work to incentivize people to do simple and stupid tasks.
  • What works is to make productive and derivative improvements.
    • Get smart devs to develop something new
    • Crowdsource tutorials and educational content

What do you mean by useful content?

  • periodic sharing
    • web3 research
    • tech deep dive
  • official websites/doc/demo
  • articles to various channels

Diffusion of Innovation Theory

· 7 min read

It takes time for good technology to gain popularity. From the chart below, we can see that even the Internet, no matter how profoundly it has changed our lives today, took 17 years to reach 50% of US households.

diffusion-rate-of-new-category-products

That’s why I am always respectful towards all kinds of innovations - no matter how small one appears today, who knows whether it will take over the world decades later?

As builders and business people, we have a product or service to sell, and the question is: how do we speed up the process of taking over the market?

The Model

Everett M. Rogers came up with Five Intrinsic Attributes of Innovation in his Diffusion of Innovation Theory:

  • Relative advantage: how much is the product perceived as better than the existing standard? We often ask, is this product 10x better than the existing one?
  • Compatibility: how easily can I apply my experience to the new product? Customers hate change even in new versions of an existing product, let alone utterly new products with entirely new features.
  • Complexity: Is it easy to use?
  • Trialability: Is it easy to try?
  • Observability: Is it obvious for people to observe the change?

The Chasm

In addition to the intrinsic attributes above, there are interactions between the innovation and the market segments. We call it the technology adoption lifecycle (TAL), which categorizes customers in the market into five segments.

TAL

The Chasm theory indicates that there is no smooth transition from early adopters to the early majority because those two market segments want different value propositions. Crossing the Chasm applies the “D-Day analogy” to solve the problem - focus, focus, focus! Focus is all it takes to attack each segment one by one - like D-Day - you take over the beach first and then move to the next target.

The Math

Every entrepreneur would dream of a beautiful S-curve for their innovation to diffuse into the market. To unveil the math behind it, let's see how Scott Page explains it in the book The Model Thinker, Chapter 11: Broadcast, Diffusion, and Contagion.

The abstraction here is to partition the population into two groups:

  • informed: people who know or have something and
  • susceptible: those who do not.

The informed group starts empty, and the susceptible group contains the entire relevant population exposed to conversion. Different models for converting people from susceptible to informed produce growth curves of different shapes.

r-shape for broadcast model

This model assumes that

  1. people capture info from public channels, and there is no word-of-mouth / mutual reference between individuals
  2. once converted, there is no moving-back

And then we get this formula

$$
I_{t+1} = I_{t} + P_{broad} \bullet S_{t}
$$

  • $P_{broad}$: the broadcast probability
  • $I_{t}$: the number informed at time $t$
  • $S_{t}$: the number susceptible at time $t$
  • Initially, $I_{0} = 0$ and $S_{0} = N_{POP}$
  • $N_{POP}$: the relevant population
  • $N_{POP} = I_{t} + S_{t}$

broadcast r curve
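
A minimal simulation of this formula (with arbitrary values for $P_{broad}$ and $N_{POP}$) reproduces the r-shaped curve: fast growth at first, then leveling off as the susceptible pool empties.

```python
def broadcast_model(p_broad: float, n_pop: int, steps: int) -> list[float]:
    """Iterate I_{t+1} = I_t + P_broad * S_t, where S_t = N_POP - I_t."""
    informed = [0.0]
    for _ in range(steps):
        i_t = informed[-1]
        s_t = n_pop - i_t
        informed.append(i_t + p_broad * s_t)
    return informed

# Arbitrary example: 20% broadcast probability, relevant population of 1000.
for t, i in enumerate(broadcast_model(p_broad=0.2, n_pop=1000, steps=10)):
    print(t, round(i))
```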

We could learn from the model that...

  • In this broadcast model, all susceptible people are eventually converted to informed; it is just a matter of how soon.
  • To maximize the informed, we should maximize the susceptible first. This means our ads should reach as many potential customers as possible.
  • To speed up the conversion, we should make our ads as frequent and impressive as possible.
  • With the formula above, we can make a rough prediction of future sales from the sales in the previous two periods.

s-shape for diffusion model

This model assumes that

  1. people capture info by mutual reference and there is no public channel
  2. once converted, there is no moving-back
  3. people are randomly mixed

And then we get this formula

$$
I_{t+1} = I_{t} + P_{diffuse} \bullet \frac{I_{t}}{N_{POP}} \bullet S_{t}
$$

  • where $P_{diffuse} = P_{spread} \bullet P_{contact}$

US smartphone penetration
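
Iterating the diffusion formula the same way (again with arbitrary parameters) produces the s-shaped curve: a slow start while few people can spread the word, then acceleration, then saturation. Note the model needs a few initial adopters, since with $I_{0} = 0$ nothing ever spreads.

```python
def diffusion_model(p_diffuse: float, n_pop: int, steps: int, seed: float = 5.0) -> list[float]:
    """Iterate I_{t+1} = I_t + P_diffuse * (I_t / N_POP) * S_t, starting from `seed` adopters."""
    informed = [seed]
    for _ in range(steps):
        i_t = informed[-1]
        s_t = n_pop - i_t
        informed.append(i_t + p_diffuse * (i_t / n_pop) * s_t)
    return informed

# Arbitrary example: P_diffuse = 0.4, population of 1000, 5 initial adopters.
for t, i in enumerate(diffusion_model(p_diffuse=0.4, n_pop=1000, steps=30)):
    print(t, round(i))
```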

We could learn from the model that...

  • How fast the conversion happens is determined by how frequently those people contact each other and how much they would like to share the info.

mixing of r-and-s-shape for Bass model

Most consumer goods and info spread through both broadcast and diffusion. Usually, for the same product, companies are running ad campaigns; meanwhile, customers are referring new customers.

susceptible-infected-recovered (SIR) model

All the models above assume no moving back from informed to susceptible. We seldom abandon our adoption of home appliances - dishwashers, air dryers, etc. However, it is not the same for fashion styles, diseases, and ... your brand in the real world. In those cases, things are contagious only for a particular time. People may forget your product as time passes and thus "recover".

Let's introduce $P_{recover}$, the probability of recovery; then we get the susceptible-infected-recovered (SIR) model.

$$
I_{t+1} = I_{t} + P_{diffuse} \bullet \frac{I_{t}}{N_{POP}} \bullet S_{t} - P_{recover} \bullet I_{t}
$$

For disease control, the infected will rise first, and we hope it will eventually drop.

SIR Model

However, for our products, we hope the informed will rise to the top. The SIR model produces a ==tipping point==, aka the ==basic reproduction number ($R_{0}$)==.

$$
R_{0} = \frac{P_{diffuse}}{P_{recover}}
$$

Products with $R_{0} > 1$ spread through the population, while products with $R_{0} < 1$ dissipate.
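
A small simulation shows the threshold in action. With the arbitrary parameters below, $R_{0} = 0.4 / 0.1 = 4 > 1$, so the informed population grows; raising $P_{recover}$ above $P_{diffuse}$ makes it dissipate instead. Following the formula above, susceptible is simply $N_{POP} - I_{t}$, i.e. people who recover return to the susceptible pool.

```python
def sir_model(p_diffuse: float, p_recover: float, n_pop: int, steps: int, seed: float = 5.0) -> list[float]:
    """Iterate I_{t+1} = I_t + P_diffuse * (I_t / N_POP) * S_t - P_recover * I_t."""
    infected = [seed]
    for _ in range(steps):
        i_t = infected[-1]
        s_t = n_pop - i_t  # per the formula above, recovered people rejoin the susceptible pool
        infected.append(i_t + p_diffuse * (i_t / n_pop) * s_t - p_recover * i_t)
    return infected

# R_0 = P_diffuse / P_recover = 4 here, so awareness spreads instead of dissipating.
for t, i in enumerate(sir_model(p_diffuse=0.4, p_recover=0.1, n_pop=1000, steps=40)):
    print(t, round(i))
```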

Take COVID as an example: its $R_{0}$ is 2 to 3. Since $P_{diffuse} = P_{spread} \bullet P_{contact}$, people wear masks, keep their distance from others, and avoid crowds to lower the diffusion probability.

$R_{0}$ is the ultimate question for marketing - would your marketing be contagious enough to fight against forgetfulness?

By the formal definition, the mass-media version of the "tipping point" is usually wrong. For example, the kink in the chart below, showing the number of Google Plus users in the first 14 days, is not a tipping point. Instead, $R_{0}$ is the real tipping point.

not a tipping point

Elements of GTM - Reach, Frequency, and Quality

Finally, here is a summary of the components worth optimizing in go-to-market campaigns. Note that, unlike diseases, spreading information carries costs, so we need to consider ROI.

  • Reach channels
    • Customer Segment Size: Make sure your ads are going to the susceptible targets, as many of them as possible.
    • Customer Segment Attributes: The more each person in the segment is likely to contact others and share the info, the more contagious the group is. Prefer people with high contact frequency and a strong willingness to share information (attack the visionaries first!).
  • How-to
    • Frequency: Make sure people cannot ignore your message by conveying it multiple times.
    • Quality Conversion Rate: Make the ads likable and memorable. It's usually something they are familiar with but still a surprise.
    • Make it big and fast. Don't forget that people will forget! Ideally, make the diffusion rate greater than the recovery rate.
  • ROI: Don't spend too little or too much on customer acquisition (LTV:CAC = 3:1).

Investment Memo Template

· 3 min read

Template for writing an investment memo to capture the learning process before spending a large amount of money.

## 20XX-XX-XX Company Name Investment Memo

| Attribute | Value |
| -------------------- | ----- |
| Category | |
| Round | |
| Raising | |
| Pre-money Valuation | |
| Post-money Valuation | |
| Allocation | |

## Summary

The decision is yes / no with an amount of X, because of the most significant argument Z.

- highlight 1, could be pros and cons.
- highlight 2
- highlight 3

Ratings: X out of 5 (benchmark against past deals)

| Attribute | Value |
| -------------------- | ----- |
| Traction | |
| Team | |
| Product | |
| Social Proof | |
| Pitch / Presentation | |
| Total | |

## Introduction

- What does the company do?
- What is the problem the company solving?
- How does the world work now in relation to this problem?
- How does the company solve the problem?
- How does solving the problem change behavior and make money?
- What is the scale of the opportunity?

## Traction / Metrics

- Discuss traction up to now (include a chart).
- Discuss main related metrics, such as churn, ACV, rake.
- Discuss revenue drivers.
- What does the go-to-market look like?

## Challenges to Growth

- What's prevented you from growing even faster?
- How will raising money solve this problem?

## Market

- Who are the customers?
- How do those customers think / act?
- How big is the opportunity these customers represent?

## Future States

- What happens to the market as the company starts to win?
- How does the company change the market and where does that lead the company?

## Competitive Landscape

- What is the competitive landscape and how does the company defeat it?

## Team

- Who is on the team, and what makes the team special?

## FAQ

- The main objections the company is likely to face, and eloquently knock them down. Data is good here.
- This is probably the part where the memo is most powerful relative to a deck.

## Use of funds

- How much has the company raised in the past?
- How much is the company raising, and what will they do with it?

Decision Making Process

- [ ] sort qualitatively
- [ ] apply filtering criteria
- [ ] create market map
- [ ] assess risks at each life stage (TAL)
- [ ] quantify uncertainties

| Stage | Early Stage Success | Cross Chasm | Mass Market Success | Mass Market Share |
|-------------|---------------------|-------------|---------------------|-------------------|
| Market | | | | |
| Product | | | | |
| Team | | | | |
| Financial | | | | |
| Total | | | | |


- [ ] perform sensitivity analysis
- [ ] calculate risk / return