
Welcome to Quantsy

In a space inundated with big-brained complexity, Quantsy really tries to keep it simple.

The business problem we're solving is clear: methodical price estimates are a huge need in the NFT space. But for our work to mean anything to anyone, the how must be as unambiguous as the what.

That’s why we find it so important to educate users on Quantsy's approach, both simply and effectively. Without understanding, there is no trust. And for us, without trust, there is no business model.

If we want our work to be embraced, it must be understood. So let's start with the basics.

Fundamentals: Value 101

Quantsy is in the business of approximating value.

Assuming no funny business, anything is fundamentally worth what someone pays. An exchange by definition is a moment in time when two parties come together, agree on a price, and trade.

At a base level, this must be true, or it stands to reason there would be no exchange at all. As such, an asset’s sales history represents explicit moments value was assigned, making sales data the authoritative signal for building pricing models; more authoritative than a floor price, or a list price, or an offer, or an influencer’s opinion, or anything else. Sales are the most reputable conveyor of value, and when it comes to NFTs, those data are immutably available on-chain for all to see.

Handle With Care

However, these data must be handled responsibly. For anyone wanting to build sophisticated valuation models, there are potential pitfalls.

Consider what building a sales data model looks like in the face of these questions:

  • What happens if a token has never sold?
  • What happens if a token’s collection value has spiked or cratered since it last sold?
  • What happens if the data are filled with wash or stolen token trades?

Data can be volatile. They can be noisy. Sometimes, even suspect. And that's naively assuming you have any data. With non-fungible tokens in particular, there is often a dearth of sales, and no data can be just as debilitating as bad data.

Moreover, issues with data relevancy can be the most insidious challenge of all. Errors of both types (inclusion of irrelevant data & omission of relevant data) are hard to identify and can dramatically skew a model’s output.

There is an abundance of challenges for those who wish to steward these data well. Ultimately, every rigorous pricing model must address those challenges to earn its customers' trust.

In One Sentence

So, how does Quantsy work?

Quantsy produces NFT price projections by enriching sales data, grouping them into relevant cohorts, and feeding the result to a machine learning model.

That’s us, (over)simplified into a single sentence.

Granted, a machine learning model isn’t novel on its own. However, an ML model with sufficient, relevant, high-quality inputs… is priceless. That’s where the true magic resides.
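
For the programmatically inclined, here's a rough mental model of that sentence as a pipeline. To be clear, this is an illustrative sketch, not our actual implementation; every function name is an assumption made for the example, and a plain average stands in for the real machine learning model.

```python
from dataclasses import dataclass
from statistics import mean


@dataclass
class Sale:
    token_id: str
    trait: str       # e.g. "solid_gold" or "floor"
    price_eth: float


def enrich(sales: list[Sale]) -> list[Sale]:
    # Enrichment stands in for steps like normalizing historical
    # prices to today's market (unpacked later in this doc).
    return sales


def build_cohort(trait: str, sales: list[Sale]) -> list[Sale]:
    # Cohorting: keep only the sales deemed relevant to this token.
    return [s for s in sales if s.trait == trait]


def model_price(cohort: list[Sale]) -> float:
    # A plain average stands in for the machine learning model.
    return mean(s.price_eth for s in cohort)


def estimate_price(trait: str, sales: list[Sale]) -> float:
    # Enrich -> cohort -> model: the sentence above, in code.
    return model_price(build_cohort(trait, enrich(sales)))
```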

Below, we unpack this sentence further to understand conceptually how the Quantsy engine runs.

Data Science Techniques

Enriching Sales Data

At Quantsy, we’ve developed proprietary data science techniques that allow us to generate insights on token value where others cannot.

Without publicly divulging the specifics of how these techniques work, we can understand their impact and importance by appreciating the problems they help solve.

Let’s take a look at one such problem and the technique we developed in response.

Challenge: Volatility

Lifetimes ago, in May of 2021, the Bored Ape Yacht Club went live. Within nine months, the cheapest BAYC for sale was over 100 ETH, up by a factor of 1,250x against its 0.08 ETH mint price. Even crazier, the more highly desired tokens in the collection, such as Solid Golds, were fetching 10,000x mint price. BAYC, like many collections, exhibited extreme volatility in a short time span.

Today, higher-priced tokens, such as Solid Gold apes, sell less often. As prices rise on these tokens, invariably, there are fewer buyers with the discretionary funds and liquidity to buy them. This notion isn't unique to NFTs; there are also only so many people with the means to buy ultra-expensive homes, cars, businesses, etc. For NFTs, it doesn't take an expert to confirm what our intuition already knows: there are more sales at the floor than at the ceiling.

With these more expensive assets, the result is fewer data points for a sales model to ingest, with longer spans of time between any incoming signal of value. It isn't strange for weeks, even months, to pass with zero sales of a Solid Gold ape.

Why does this matter?

What if, in the meantime, the surrounding market completely changes? Months can be a long, long time in the NFT space. What happens if a token’s collection value has spiked or cratered since it last sold? How does this volatility impact a model, particularly with tokens that have fewer sales? Are older sales data still reliable? How can a model stay current with stale inputs?

All great questions.

Answer: Normalize

Quantsy’s data science specifically addresses these challenges. We normalize for volatility across a collection’s lifespan by mathematically transposing every sale to reflect today’s market. “Stale” data can be commingled with the data of today; every sale has a common denominator.

Why does this matter?

Because it deepens the data pool from which we create pricing models and produce insights. More quality inputs mean better price estimates, more often. Enriching sales data through normalization is Quantsy's way of adjusting for the macro changes that can make it hard to see meaningful patterns.
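
Our exact transformation is proprietary, but the spirit of it can be shown with a toy version: scale each historical price by the ratio of a collection-level index today to that index at the time of the sale. The index here (think a rolling median of collection sales) and the numbers are assumptions made purely for illustration.

```python
def normalize_sale(price_eth: float,
                   index_at_sale: float,
                   index_today: float) -> float:
    """Restate a historical sale price in today's market terms."""
    return price_eth * (index_today / index_at_sale)


# A hypothetical token sold for 10 ETH when the collection index sat at 40.
# With the index at 16 today, that old sale is restated as 4 ETH in
# today's terms, so it can sit on the same axis as recent sales.
print(normalize_sale(10.0, index_at_sale=40.0, index_today=16.0))  # 4.0
```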

Without this method, older data are a liability. Think again about BAYC. Early, unprocessed sales from the summer of '21 cannot reasonably coexist with more recent sales on the same axis. If we tried, the model would strain to establish a predictive function between variables and price (with a massive range), and no significant relationship would be found.

But… is that true? No relationship? Or were poor modeling choices made?

There can be valuable insights to glean from relationships that exist between variables, even in the past. Normalizing for volatility is a method Quantsy developed to unlock those insights, turning a potential liability into a strength. It is just one technique we engineered to ensure that, as often as possible, there are adequate, high-quality inputs to feed our models and produce price estimates.

Having those data is step one. Knowing when, and when not, to use them is step two.

Right Data, Right Time

We know that a model is only as good as its inputs. But there is another crucial consideration to be made: data relevancy.

Challenge: Relevancy

If we wanted to estimate the value of a lime green luxury sports car, we'd most likely gather as many car sales as possible from a trusted source and build a predictive model. The next question is: which data should we feed the model? Meaning, which are relevant?

Clearly, one approach is to analyze data as germane to the use case as possible, starting with the exact match. Here, that would be recent sales of cars that are luxury branded, of the sports type, and lime green in color.

But all too often (especially with non-fungible assets), requiring an exact match is too stringent; the more granular the filters, the fewer the results. Too narrow a focus, and we can't model an outcome at all. Too wide a focus, and irrelevant data dilute the model. It's a challenging tradeoff.

Beyond that, it may be presumptuous to assume that exact-match criteria are actually best for relevancy at all. Consider this: which piece of information is more helpful when estimating your home's value, the price you paid ten years ago, or the price of your neighbor's house, a very similar house, that sold last week? The answer isn't always clear.

For our car example, maybe it makes sense to model sales of luxury sports cars in all exotic colors, not just lime green? Or, maybe we should include data of all lime green sports cars, not just those of the luxury class? Widening the aperture may be the best move, but by how much? Which data choices should we make to accomplish our goals? The permutations can be dizzying.
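
To see that tradeoff in miniature, here's a toy filter over a handful of invented car sales. Every extra criterion shrinks the pool of comparable data; all records and attribute names below are made up for illustration.

```python
sales = [
    {"brand": "luxury",  "type": "sports", "color": "lime green"},
    {"brand": "luxury",  "type": "sports", "color": "electric yellow"},
    {"brand": "luxury",  "type": "suv",    "color": "lime green"},
    {"brand": "economy", "type": "sports", "color": "lime green"},
]


def matching(criteria: dict) -> list[dict]:
    # Keep only the sales that satisfy every filter criterion.
    return [s for s in sales if all(s[k] == v for k, v in criteria.items())]


# Exact match (luxury + sports + lime green): 1 comparable sale.
print(len(matching({"brand": "luxury", "type": "sports", "color": "lime green"})))
# Drop the color filter (luxury + sports): 2 comparable sales.
print(len(matching({"brand": "luxury", "type": "sports"})))
```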

Why does this matter?

Relevancy, it turns out, is relative for each and every token, dynamically evolving with every sale in the collection. Every pricing model, Quantsy or not, must wrestle with, and answer, this critical question: which is the right subset of data to leverage for the most accurate modeling outcome?

Answer: Intelligent Cohorting

We answer this question with another proprietary technique that allows our algorithm to create intelligently compiled cohorts for each token across every supported collection: a systematic, mathematical approach that curates which sales belong, and don’t belong, in a token’s custom pricing model.

Why does this matter?

First is scale. Quantsy's intelligent cohorting applies uniformly across every collection, allowing us to provide price estimates without a heavy-handed, manual process. More collections. Less time.

Second is trust. As with valuations of all non-fungible assets, there is a component of subjectivity. Appraising a historical painting may include analysis of other paintings by the same artist, or similar paintings within the same art movement, or paintings from the same time period or region. Or… the appraisal may include sales of only the painting itself. Certainly, it all depends. It's relative.

And that’s the key point here: if data relevancy is relative, there is an even larger need for a methodical, objective approach to how it is determined.

Let’s think again about lime green luxury sports cars, now with Quantsy in our toolkit.

To start, Quantsy would analyze patterns across the entire collection to understand whether the sales of a lime green luxury sports car must be their own cohort. It may be that these cars are so unlike any other vehicle that the model must only include sales of this exact kind of car. Nothing else is comparable. No other data would be relevant. Maybe we have sufficient, exact-match data and are able to produce a price estimate. Maybe we aren't. That line must be known and respected.

Conversely, Quantsy can tell us whether lime green luxury sports cars are not so unlike everything else, and whether there are similar assets whose data could be usefully relevant.

Perhaps, electric yellow (not just lime green) luxury sports cars are similar enough and ought to be included in the relevant cohort. Perhaps, lime green luxury SUVs (not just sports cars) are viably comparable and ought to be included. Or perhaps, the data reveal that all colors, all types, and all brands should go in the model. The combinations are endless!

The comforting thing is that the data, though not obvious at first, are usually telling a pretty strong story. One just needs the right tools to decipher it.
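
As a very rough caricature of such a tool (and emphatically not our proprietary math), imagine relaxing the filters from the previous sketch, in a fixed order, until the cohort holds enough sales to model. The minimum-cohort threshold and the relaxation order below are assumptions made for this example.

```python
MIN_SALES = 2  # assumed minimum cohort size before we trust a model

sales = [
    {"brand": "luxury", "type": "sports", "color": "lime green"},
    {"brand": "luxury", "type": "sports", "color": "electric yellow"},
    {"brand": "luxury", "type": "suv",    "color": "lime green"},
]


def widen_cohort(criteria: dict, relax_order: list[str]) -> list[dict]:
    criteria, relax_order = dict(criteria), list(relax_order)
    while True:
        cohort = [s for s in sales
                  if all(s[k] == v for k, v in criteria.items())]
        if len(cohort) >= MIN_SALES or not relax_order:
            return cohort
        # Widen the aperture: drop the next least-important criterion.
        criteria.pop(relax_order.pop(0), None)


cohort = widen_cohort(
    criteria={"brand": "luxury", "type": "sports", "color": "lime green"},
    relax_order=["color", "type", "brand"],
)
print(len(cohort))  # 2 -- the color filter was dropped to reach MIN_SALES
```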

The Quantsy Method

Ultimately, Quantsy wins when our prices are trusted, and we’re trusted when we’re understood. For now, after reading a bit about our data science and general approach, we hope four things are abundantly clear to you.

  1. We understand the importance of our company’s mission.
  2. We comprehensively grasp the problems we’re trying to solve.
  3. We are confident in the product we offer.
  4. We want it to matter to you.

It is so easy to feel lost in the NFT space, which, at times, can feel like a black box, devoid of reasonable, unbiased, trusted information. For valuations at least, it doesn't have to be that way.

That’s why we founded Quantsy. There can be a method to the madness.

To partner with us or learn more about our proprietary techniques, contact us at hello@quantsy.io.