wglint / 1_test

A simple model that adds two numbers and sends the answer to Supabase


Run time and cost

This model runs on CPU hardware. We don't yet have enough runs of this model to provide performance information.

Readme

What this model does and how it works

What this model does

This model, named 1_Test, does not use any ML algorithm; it is a test to learn how to use Replicate and Cog. It uses the community-maintained supabase Python library to insert the model's output into a database.

The user provides two integers, a and b, plus their guess for the sum a+b. The model tells the user whether the calculation is right or wrong and sends the result to Supabase.
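Each run inserts one row into a replicate table. Based on the insert call in predict.py below, a stored row looks roughly like this (column names, including the seconde spelling, are taken from the code):

{
    "first": 8,       # input a
    "seconde": 5,     # input b (spelled this way in the code and table)
    "sum": 13,        # the sum proposed by the user
    "answer": True    # whether a + b really equals the proposed sum
}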

How this model works

Before starting, you need Cog and Docker. To learn more about Cog, see its docs on GitHub. To get started, install Cog with brew:

brew install cog
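
You can check that the install worked (assuming a standard Cog setup) with:

cog --version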

For this model, I use only 3 files: .env, cog.yaml, and predict.py.

All the code is in this GitHub repo.

Or, let's go through all the code here:

.env

SUPABASE_URL="URL"
SUPABASE_KEY="KEY"
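
URL and KEY are placeholders for your own Supabase project URL and API key. As a quick sanity check, here is a minimal sketch (assuming the python-dotenv package is installed) to confirm the file loads before building the image:

import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory
print(os.environ.get("SUPABASE_URL"))  # should print your project URL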

cog.yaml

# Configuration for Cog ⚙️
# Reference: https://github.com/replicate/cog/blob/main/docs/yaml.md

build:
  # set to true if your model requires a GPU
  gpu: false

  # a list of ubuntu apt packages to install
  # system_packages:
  #   - "libgl1-mesa-glx"
  #   - "libglib2.0-0"

  # python version in the form '3.11' or '3.11.4'
  python_version: "3.11"

  # a list of packages in the format <package-name>==<version>
  # python_packages:
  #   - "numpy==1.19.4"
  #   - "torch==1.8.0"
  #   - "torchvision==0.9.0"

  python_packages:
    - "supabase"
    - "python-dotenv"

  # commands run after the environment is setup
  # run:
  #   - "echo env is ready!"
  #   - "echo another command if needed"

# predict.py defines how predictions are run on your model
predict: "predict.py:Predictor"
image: "r8.im/wglint/1_test"
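
With cog.yaml and predict.py (below) in place, you can build and test the model locally (assuming Docker is running), then push it to Replicate using the image name configured above:

cog predict -i a=8 -i b=5 -i sum=13
cog push r8.im/wglint/1_test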

predict.py

# Prediction interface for Cog ⚙️
# https://github.com/replicate/cog/blob/main/docs/python.md

import os
from supabase import create_client, Client
from cog import BasePredictor, Input
from dotenv import load_dotenv

class Predictor(BasePredictor):
    def setup(self) -> None:
        """Load the model into memory to make running multiple predictions efficient"""
        # self.model = torch.load("./weights.pth")

        """ CREATE SUPABASE CLIENT """
        load_dotenv()
        self.key : str = os.environ.get("SUPABASE_KEY")
        self.url : str = os.environ.get("SUPABASE_URL")
        self.supabase : Client = create_client(
            self.url,
            self.key
        )

    def supabaseAnswer(self, a: int, b: int, sum: int, answer: bool) -> str:
        try:
            # insert one row into the 'replicate' table
            self.supabase.table('replicate').insert({
                "first": a,
                "seconde": b,
                "sum": sum,
                "answer": answer
            }).execute()
            return "Success"
        except Exception:
            return "Error"


    def predict(
        self,
        a: int = Input(description="Enter an integer (e.g. 8)", default=8),
        b: int = Input(description="Enter an integer (e.g. 5)", default=1),
        sum: int = Input(description="Enter the sum of the two integers a+b (e.g. 8+5 = 13)", default=2),
    ) -> str:
        """Run a single prediction on the model"""
        # processed_input = preprocess(image)
        # output = self.model(processed_image, scale)
        # return postprocess(output)

        print("\n\n\nChecking whether you got the right answer...")
        if (a + b) == sum:
            status = self.supabaseAnswer(a, b, sum, True)
            print("Correct, answer sent to Supabase!")
            return f"Hello! You got the right answer because {a} + {b} = {sum}!\nSupabase status: {status}"
        else:
            status = self.supabaseAnswer(a, b, sum, False)
            print("Wrong, answer sent to Supabase")
            return f"Hello! That's wrong... The correct answer for {a} + {b} is {a+b}, not {sum}!\nSupabase status: {status}"

Let's check out my other models!