ViaResto

    ViaResto is a website developed by ViaRézo for monitoring the waiting time in CROUS restaurants.


    Installation

    Use the template files in ./backend and ./frontend to define the configuration and environment variables.
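
For example, the MySQL container started in the next section reads its settings from a .env file via --env-file. The lines below are only an illustration with placeholder values; the actual variable names are defined by the template files in the repository.

MYSQL_ROOT_PASSWORD=change-me
MYSQL_DATABASE=eatfast
MYSQL_USER=eatfast
MYSQL_PASSWORD=change-me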


    In development mode

    Run the backend server

    Start the mysql container
    docker run -p 3306:3306 --env-file .env -d mysql:latest mysqld --default-authentication-plugin=mysql_native_password
    Build and start the tensorflow/serving container
    docker build -t model -f ./Dockerfile-model . && docker run -p 8501:8501 --env MODEL_NAME=model -d model
From ./backend, install the dependencies by executing pip install -r requirements.txt, then run the uvicorn server directly:
python -m uvicorn main:app --reload --port=3001 --host 0.0.0.0
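
Once the three services are up, a quick sanity check (assuming the default ports used above, and that the backend is a FastAPI app, which serves its interactive docs at /docs by default):

$ curl http://localhost:8501/v1/models/model    # TensorFlow Serving model status
$ curl http://localhost:3001/docs               # backend API documentation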


    Run the frontend server

Open a new terminal and run:

    $ cd frontend
    $ npm install
    $ npm start

    In production mode

From ./backend, execute docker-compose build && docker-compose up -d.
From ./frontend, run npm run build and serve the generated build.
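
One way to serve the generated build (a sketch only; the serve package is not prescribed by the project) is, from ./frontend:

$ npm run build
$ npx serve -s build -l 3000

Any other static file server, e.g. nginx pointed at the build/ directory, works equally well.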


    Linting

For new commits to be deployable, both the backend and the frontend must pass their linters.
To lint the backend code, run pycodestyle --config=./setup.cnf ./backend. To fix the errors, you can auto-format with autopep8 by running autopep8 --in-place --global-config=./setup.cnf --recursive --aggressive ./. If you use a virtual environment called env/, add --exclude=./env to both commands so the linter ignores that folder, as shown below.
    To lint the frontend, run npm run lint. You can fix most of the errors using npm run format.
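
With the env/ exclusion added, the two backend commands become:

$ pycodestyle --config=./setup.cnf --exclude=./env ./backend
$ autopep8 --in-place --global-config=./setup.cnf --recursive --aggressive --exclude=./env ./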


    Contributing

    If you would like to contribute, contact us at support@ml.viarezo.fr.


    License

The project doesn't have any license. However, the crowd-counting AI model used is based on this repository: https://github.com/ZhengPeng7/W-Net-Keras. The dataset used is ShanghaiTech Part B. The model takes a 3-channel image and generates a density map of half the size of the input image. The estimated number of people is obtained by summing over all pixels of the density map.
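
As an illustration of that last step, the sketch below queries the TensorFlow Serving container started in the development section and sums the returned density map. The preprocessing (RGB conversion and 0-1 scaling) and the exact input shape are assumptions and must match what the real backend does; only the REST endpoint format and the pixel sum follow from the description above.

import numpy as np
import requests
from PIL import Image

# Hypothetical preprocessing; the real normalisation may differ.
image = Image.open("crowd.jpg").convert("RGB")
pixels = np.asarray(image, dtype=np.float32) / 255.0        # shape (H, W, 3)

# TensorFlow Serving REST predict endpoint (container started with MODEL_NAME=model).
response = requests.post(
    "http://localhost:8501/v1/models/model:predict",
    json={"instances": [pixels.tolist()]},
)
density_map = np.array(response.json()["predictions"][0])   # roughly (H/2, W/2, 1)

# The estimated number of people is the sum over all pixels of the density map.
print("Estimated count:", float(density_map.sum()))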