Quoth is an all-chain NFT authentication oracle with AI/ML-powered search, plus mint and bridge SDKs and APIs.

Quoth is redefining NFT ownership through infrastructure: its oracle API and SDKs enable any wallet, marketplace, or protocol to accurately search and list authenticated NFTs.


Its artificial intelligence is in the process of indexing every NFT in existence into comprehensive data sets. The platform offers a Google-like NFT search that combines everyday semantics with machine learning. Instant oracles provide authentication and suggested behavioral NFT buy lists. In addition, the platform offers all-chain fractional bridging.

Target users: artists, collectors, buyers, traders, marketplaces, and wallets.

Quoth’s IDO is happening on January 26, 2022 at 10:00 UTC on RedKite and Oxbull.

Quoth Token

Utility token on Binance Smart Chain.


  • User rewards for neural net training and AI results feedback
  • Rewards for running on-chain oracles used to serve API endpoints
  • Rewards for bridge staking


  • Payments for access to APIs across chains (Chainlink & The Graph model)
  • Bridging fees
  • Staking to secure the bridge

There is a total supply of 55,000,000 tokens.

  • Seed
  • Private purchasers
  • IDO
  • Team
  • Advisors
  • Marketing
  • Liquidity
  • Community incentives
  • Ecosystem Development Fund

Token Distribution            No. of Tokens    Percentage (%)    Offering Price (USD)    Value (USD)
Private Investors             5,716,667        10.4%             0.45                    $2,572,500
Community Incentives          9,350,000        17.0%             -                       -
Ecosystem Development Fund    13,750,000       25.0%             -                       -

No. The supply is fixed; there is no burn or tax mechanism on the tokens.

  • Wallets and marketplaces verifying NFTs to ensure authenticity and originality.
  • Collectors get true market rates of NFTs rather than floor prices.
  • Marketplaces vetting NFT projects before listing to ensure authenticity.
  • Collectors bridging assets to all chains to gain access to deeper liquidity and yields using popular DeFi protocols.
  • Platforms securely bridging NFT assets to other networks to access a wider range of collectors.
  • Owners looking to maximize returns by fragmenting the original NFT into NFTs on multiple chains.
  • Future use case as NFT licensing mechanism replacing traditional copyright with a crypto native solution.
  • An AI recommendation engine for NFT appraisals and a similar-items-for-sale algorithm.

Artificial Intelligence

  • API Service
  • Data Validator Server
  • Payment Processor
  • Data Publisher

This module handles the interaction between the user interface and the system. It provides a REST API, and it interacts with the database and the Data Validator service (over TCP) as a provider of form-based data validation.
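The wire format between the API Service and the Data Validator is not specified in the document; a minimal sketch assuming length-prefixed JSON framing over the TCP connection (function names and fields are illustrative):

```python
import json
import struct

def frame_validation_request(payload: dict) -> bytes:
    """Encode a validation request as length-prefixed JSON, a common
    framing scheme for discrete messages over a raw TCP stream."""
    body = json.dumps(payload).encode("utf-8")
    return struct.pack(">I", len(body)) + body

def parse_validation_request(data: bytes) -> dict:
    """Decode a length-prefixed JSON message back into a dict."""
    (length,) = struct.unpack(">I", data[:4])
    return json.loads(data[4:4 + length].decode("utf-8"))
```

The 4-byte big-endian length prefix lets the receiving side know exactly how many bytes to read before parsing, which avoids message-boundary ambiguity on a stream socket.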

This module processes data validation requests. It is designed with a modular approach so that new providers can be added in the future, and it supports both automated providers and manual data processing.
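The modular provider approach could be sketched as a small plugin registry; all class and field names here are illustrative assumptions, not the actual implementation:

```python
from abc import ABC, abstractmethod

class ValidationProvider(ABC):
    """Base class for pluggable validation providers."""
    @abstractmethod
    def validate(self, submission: dict) -> bool: ...

class AutomatedProvider(ValidationProvider):
    def validate(self, submission: dict) -> bool:
        # Toy rule: require a non-empty contract address.
        return bool(submission.get("contract_address"))

class ManualReviewProvider(ValidationProvider):
    def __init__(self):
        self.queue = []
    def validate(self, submission: dict) -> bool:
        # Queue the submission for a human reviewer; pending until resolved.
        self.queue.append(submission)
        return False

class DataValidator:
    """Dispatches submissions to registered providers; new providers
    can be registered without changing existing code."""
    def __init__(self):
        self.providers: dict[str, ValidationProvider] = {}
    def register(self, name: str, provider: ValidationProvider) -> None:
        self.providers[name] = provider
    def validate(self, name: str, submission: dict) -> bool:
        return self.providers[name].validate(submission)
```

Registering a new provider is a one-line change, which is the point of the modular design.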

This service accepts payments. It is built with a modular approach so that additional payment providers, including cryptocurrency payments, can be integrated.

This service provides the functionality for sending data to the blockchain and the IPFS network.

It will identify NFTs on blockchains and pull the following data:

  • Contract addresses
  • All associated transactions, timestamps, price changes and originator information
  • Metadata
  • Links to the data contained in the metadata

The data will then be processed and tagged by the AI, which identifies the type of data (image, video, audio, text, etc.), performs semantic image description, image-to-text, sound-to-text, and video-to-text services, and creates a database ready for reverse image, video, and audio searches.
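The type-identification step could be sketched as a dispatch on the guessed media type; the handlers below are placeholders for the real image-to-text, sound-to-text, and video-to-text services, and the record shape is an assumption:

```python
import mimetypes

def classify_asset(url: str) -> str:
    """Guess the media type of an NFT asset from its URL/filename."""
    mime, _ = mimetypes.guess_type(url)
    if mime is None:
        return "unknown"
    for kind in ("image", "video", "audio", "text"):
        if mime.startswith(kind):
            return kind
    return "other"

# Placeholder handlers; each would call the corresponding AI service.
HANDLERS = {
    "image": lambda url: {"tags": ["image-to-text pending"], "source": url},
    "video": lambda url: {"tags": ["video-to-text pending"], "source": url},
    "audio": lambda url: {"tags": ["sound-to-text pending"], "source": url},
    "text":  lambda url: {"tags": ["semantic index pending"], "source": url},
}

def process_asset(url: str) -> dict:
    """Classify the asset and route it to the matching handler."""
    kind = classify_asset(url)
    handler = HANDLERS.get(kind, lambda u: {"tags": [], "source": u})
    record = handler(url)
    record["type"] = kind
    return record
```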

  • It will be based on a well-tested stack of ML libraries and tools (mostly Python-based): TensorFlow/Keras for deep learning; Milvus for the semantic index; SpaCy, Gensim, and NLTK for text processing; and other libraries for data management and overall orchestration.


  • Node.js


Frontend Architecture

  • SSR support for correctly forming responses to requests (prerendering for social networks, indexing by search bots)
  • Modular architecture for easier integration of new networks, wallets and interfaces in the future.
  • Next.js will be used as a framework, which is based on React.js.

Smart Contracts

  • The token will be based on Binance Smart Chain for the IDO, and later it will be based on an ERC-721 contract (ERC-721 is a standard that is compatible with the BEP standards; the basis is the implementation of ERC-721, which in essence does not differ from the BEP).

    The contract on the Binance network will be expanded with additional functionality to enable the liquidation of the token when its counterpart is released on the Ethereum side.


  • Interaction goes through CloudFront (CF) as the main provider of caching and CDN
  • The first user request to the site generates an SSR, the result of which is cached at the CF level
  • Primary data is requested through the Backend API. Data caching must be implemented on the Backend API side.
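The caching flow above can be illustrated with a toy in-memory cache standing in for CloudFront; the TTL, keying by path, and render callback are all assumptions for illustration:

```python
import time

class SSRCache:
    """Toy edge cache: the first request for a path triggers a server-side
    render (SSR); subsequent requests within the TTL are served from cache."""
    def __init__(self, render, ttl: float = 60.0):
        self.render = render        # callable: path -> html
        self.ttl = ttl
        self.store = {}             # path -> (expires_at, html)

    def get(self, path: str):
        """Return (html, from_cache); renders and caches on a miss."""
        now = time.monotonic()
        hit = self.store.get(path)
        if hit and hit[0] > now:
            return hit[1], True
        html = self.render(path)
        self.store[path] = (now + self.ttl, html)
        return html, False
```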

Data Publishing Process

The system is a minter of tokens; however, minting is carried out directly under the ownership of the original user. Accordingly, Quoth is only a provider, without the ability to manage the minted token in the future. The token's meta information, as well as the image, must first be uploaded to the IPFS network. https://nft.storage/ will be used as a gateway.

To preserve the originality of the data, the token should not be re-minted. An additional contract will be used to make changes. Since the end user owns the token, only the owner can make changes. However, the contract will be built so that the user can provide only a signature authorizing the changes to the Quoth system, which will then send the changes to the contract itself, paying the fee from its own account.
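A toy sketch of this signature-delegation flow, using an HMAC over a shared secret as a stand-in for the real on-chain ECDSA signature (with ECDSA, the relayer would verify against the owner's public address rather than a shared secret; all names here are illustrative):

```python
import hashlib
import hmac

def sign_change(user_secret: bytes, token_id: int, new_meta_uri: str) -> str:
    """User signs the proposed metadata change off-chain.
    HMAC-SHA256 stands in for the real ECDSA signature."""
    msg = f"{token_id}:{new_meta_uri}".encode()
    return hmac.new(user_secret, msg, hashlib.sha256).hexdigest()

def relay_change(user_secret: bytes, token_id: int,
                 new_meta_uri: str, signature: str) -> dict:
    """Quoth's relayer verifies the signature and, if valid, submits
    the update transaction itself, paying the fee from its own account."""
    expected = sign_change(user_secret, token_id, new_meta_uri)
    if not hmac.compare_digest(expected, signature):
        raise ValueError("invalid signature")
    return {"token_id": token_id, "meta_uri": new_meta_uri, "fee_payer": "quoth"}
```

The key property illustrated: the user never sends a transaction or pays gas; they only produce a signature over the exact change, which the relayer cannot alter without invalidating it.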

The token transfer includes three steps:

  1. A request for token transfer that includes the address permitted to burn the token: either the owner's own transaction, or the owner's signature handed to the system so the transaction can be sent from another account
  2. Minting the token on the Ethereum side
  3. Burning the token on the Binance side from the address specified in step 1, supplying the address and token ID on the Ethereum side

The transfer of a token to another account can be carried out directly by the end user sending a transaction to the blockchain network, and it is not implemented on the system side.
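The three bridging steps above can be modeled as a small state machine; the class, states, and addresses are illustrative:

```python
class BridgeTransfer:
    """Toy model of the three-step mint-then-burn bridge flow."""
    def __init__(self, token_id: int, burn_address: str):
        self.token_id = token_id
        self.burn_address = burn_address   # address allowed to burn (step 1)
        self.state = "requested"
        self.eth_token_id = None

    def mint_on_ethereum(self, eth_token_id: int) -> None:
        """Step 2: mint the counterpart token on the Ethereum side."""
        assert self.state == "requested"
        self.eth_token_id = eth_token_id
        self.state = "minted"

    def burn_on_binance(self, caller: str) -> None:
        """Step 3: burn the original on the Binance side; only the
        address specified in step 1 may do this."""
        assert self.state == "minted"
        if caller != self.burn_address:
            raise PermissionError("caller may not burn this token")
        self.state = "burned"
```

Minting before burning means a failed Ethereum mint leaves the original token intact, at the cost of a brief window where both exist (the burn completes the liquidation).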

Text Based Artificial Intelligence Overview

  • Process existing and new NFTs and index all textual content (text extracted from image or video) based on AI/ML (Artificial Intelligence/Machine Learning). 
  • Classify existing and new quotes (millions) according to automatically discovered categories (tags). The categories will be determined according to analysis of existing NFTs and quotes. 
  • Provide originality (plagiarism) check for text of newly created NFT quotes vs. existing NFTs.

Scope and Implementation

  • Component (with API endpoint) for processing new NFTs, both those created outside the Quoth platform and newly accepted (minted) quotes. Text extraction from images or videos will be implemented using a 3rd-party service, such as the Google Cloud Vision API. 
  • Component (API endpoint for Data Validator Service) for testing uniqueness and categorization (tagging) of newly submitted quotes. 
  • Specialized storage for text indices.

Originality (Plagiarism) Detection

The originality detection component will perform two kinds of checks: 

  • Literal compare: Two quotes will be regarded as identical if they coincide up to punctuation and spelling errors. Common spelling errors will be corrected automatically using ML models trained on existing quotes and NFTs (their names and descriptions). 
  • Paraphrase detection: Two quotes will be regarded as identical if they have similar semantic representations, up to some threshold. The threshold and the exact type of semantic representation will be optimized during project implementation.
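A toy sketch of the two checks, assuming punctuation/case normalization for the literal compare and a word-overlap ratio (via difflib) as a crude stand-in for the learned semantic representation the document describes:

```python
import difflib
import string

def normalize(text: str) -> str:
    """Strip punctuation and case so the literal compare ignores them
    (spelling correction would run before this in the real pipeline)."""
    table = str.maketrans("", "", string.punctuation)
    return " ".join(text.lower().translate(table).split())

def literal_match(a: str, b: str) -> bool:
    """Literal compare: identical after normalization."""
    return normalize(a) == normalize(b)

def paraphrase_score(a: str, b: str) -> float:
    """Stand-in for semantic similarity: word-sequence overlap ratio.
    The real system would compare learned semantic embeddings."""
    return difflib.SequenceMatcher(
        None, normalize(a).split(), normalize(b).split()).ratio()

def is_duplicate(a: str, b: str, threshold: float = 0.8) -> bool:
    """A quote is a duplicate if either check fires."""
    return literal_match(a, b) or paraphrase_score(a, b) >= threshold
```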

Quote Tagging

  • Self-supervised training of a custom language model (word embedding); in the case of small datasets (less than a few gigabytes of text), some suitable large text dataset, such as Wikipedia, is added to the target dataset. 
  • Based on the language model, a semantic representation of each document in the dataset is calculated. 
  • Several clusters are found in the semantic space of the dataset to find distinct ‘islands of meaning.’
  • For the visual inspection of the dataset, a two-dimensional map of the document collection is calculated; this map connects the topics, keywords, and documents. Note: in the two-dimensional representation the topics only approximately correspond to visual clusters (‘blobs of meaning’). 
  • Optionally meaningful names can be manually inferred for the found clusters. 
  • Finally, for longer documents, the last 4 steps are first performed for short (10-200 words) excerpts from original documents and then repeated to produce ‘document classes.’ 
  • This approach makes it possible to tag the document collection with minimal human labeling. The resulting topics (usually from several tens to several hundreds are found) are far easier to label than manually labeling the initial document collection. 
  • In addition, the visual representation provides an overview of the text collection and the relations between groups of texts, and helps to select the most important tags (document categories).
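The clustering steps above can be illustrated with a greedy word-overlap clusterer standing in for clustering over learned embeddings; the Jaccard threshold and all helper names are assumptions for this sketch:

```python
from collections import Counter

def word_set(doc: str) -> frozenset:
    return frozenset(doc.lower().split())

def jaccard(a: frozenset, b: frozenset) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_docs(docs, threshold: float = 0.25):
    """Greedy clustering: each document joins the first cluster whose
    accumulated word set it overlaps enough with, else starts a new one.
    A stand-in for clustering in a learned semantic space."""
    clusters = []   # list of (cluster_word_set, [member docs])
    for doc in docs:
        ws = word_set(doc)
        for i, (cws, members) in enumerate(clusters):
            if jaccard(ws, cws) >= threshold:
                members.append(doc)
                clusters[i] = (cws | ws, members)
                break
        else:
            clusters.append((ws, [doc]))
    return [members for _, members in clusters]

def tag_cluster(members) -> list:
    """Name a cluster by its most frequent words (toy topic label)."""
    counts = Counter(w for d in members for w in d.lower().split())
    return [w for w, _ in counts.most_common(2)]
```

This mirrors the document's pipeline in miniature: find 'islands of meaning', then derive a cheap label per island instead of labeling every document.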

Text Extraction

  • Given the very different fonts, languages, and artistic effects used in existing NFTs, it is proposed to use existing cloud services for text extraction. 
  • As a basic variant, the Google Vision API will be used (it supports tens of languages, handwritten text, and videos), but other major providers will also be examined, such as the OCR services from Microsoft Azure and Amazon AWS. 
  • After text extraction, texts in unsupported languages, texts with a large number of spelling errors, and texts shorter than a threshold will be discarded. 
  • During post-processing, spelling errors will be corrected.
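The filtering step could be sketched as a simple vocabulary-based filter; in the real system, language support would come from the OCR service and spelling errors from a trained model, so the wordlist check below is only a crude stand-in:

```python
def keep_extracted_text(text: str, known_words: set,
                        min_words: int = 3,
                        max_error_ratio: float = 0.5) -> bool:
    """Decide whether OCR output is worth indexing: discard texts that
    are too short or where too many words fall outside the known
    vocabulary (a stand-in for language and spelling-error checks)."""
    words = text.lower().split()
    if len(words) < min_words:
        return False
    unknown = sum(1 for w in words if w.strip(".,!?") not in known_words)
    return unknown / len(words) <= max_error_ratio
```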


Chris Boundikas - CEO - https://www.linkedin.com/in/chris-boundikas-903410139/

Chris has been involved in the blockchain space since 2011. Starting as a trader, he founded North America's largest GPU-based mining operation and launched Anguilla's legal ICO framework while consulting for the Government. He then advised multiple blockchain projects before shifting to a full-time position at Quoth.

Alex Dolgov - CTO - https://www.linkedin.com/in/alexeidolgov/

A blockchain expert since 2012 who previously held key positions at reputable development and advisory firms. He has been involved in the development of multiple high-level blockchain projects and aims to keep pushing innovative concepts.

Cloud Office - The main team is situated in Europe and the Cayman Islands, with additional support from team members across the globe (Belarus, Canada, and elsewhere).

25 team members, including:

  • 15 team members in blockchain/development/design 
  • 10 team members in operations/marketing/business development

Join the community

Visit our Telegram group to meet the growing community of investors & users, and hear all the latest updates.