
Learning React

Some notes and such from my learnings into React.

I’ve been trying to learn a new way to build UI and UX since moving on from PHP templating and building sites that use jQuery UI for animations.

Flutter and Swift have been okay. Hated Monkey C. Didn’t mind Qt, but it never ‘vibed’ with me, coming from HTML.

Everyone really rates React, so that’s what I’m going to return to trying.


Starting Project

Last time I started learning React, I did everything in a Docker container. That’s just too much overhead to get started with, so now I’m just running software on bare metal.

npx create-react-app someapp

The above pre-populates the directory someapp with everything needed to start your React app.

Basic Routing

The above seems to give a basic, raw ReactJS app to work with; i.e. nothing extra like a router bundled in.

We want to keep this pretty basic. I’ll retain App.css and App.js and make changes to these. The first thing I want to do, though, is create some “views”:

The directory structure that I start with looks like this:

.
├── package-lock.json
├── package.json
├── public
│   ├── favicon.ico
│   ├── index.html
│   ├── logo192.png
│   ├── logo512.png
│   ├── manifest.json
│   └── robots.txt
├── README.md
└── src
    ├── App.css
    ├── App.js
    ├── App.test.js
    ├── index.css
    ├── index.js
    ├── logo.svg
    ├── reportWebVitals.js
    └── setupTests.js

3 directories, 17 files

This has been updated to:

.
├── package-lock.json
├── package.json
├── public
│   ├── favicon.ico
│   ├── index.html
│   ├── logo192.png
│   ├── logo512.png
│   ├── manifest.json
│   └── robots.txt
├── README.md
└── src
    ├── App.css
    ├── App.js
    ├── App.test.js
    ├── index.css
    ├── index.js
    ├── logo.svg
    ├── reportWebVitals.js
    ├── setupTests.js
    └── Views
        ├── Home.js
        └── OtherHome.js

4 directories, 19 files

We can install the default react router with npm install react-router-dom.

We need to update App.js to use this router now:

% git diff src/App.js
diff --git a/routing_test/src/App.js b/routing_test/src/App.js
index 3784575..92644d0 100644
--- a/routing_test/src/App.js
+++ b/routing_test/src/App.js
@@ -1,24 +1,26 @@
-import logo from './logo.svg';
-import './App.css';
+import { BrowserRouter, Routes, Route, Link } from "react-router-dom";
+import "./App.css";
+
+import Home from "./Views/Home.js";
+import OtherHome from "./Views/OtherHome.js";

 function App() {
   return (
-    <div className="App">
-      <header className="App-header">
-        <img src={logo} className="App-logo" alt="logo" />
-        <p>
-          Edit <code>src/App.js</code> and save to reload.
-        </p>
-        <a
-          className="App-link"
-          href="https://reactjs.org"
-          target="_blank"
-          rel="noopener noreferrer"
-        >
-          Learn React
-        </a>
-      </header>
-    </div>
+    <BrowserRouter>
+      <div>
+        <nav>
+          <Link to="/">Home</Link>
+          <Link to="/home">Also Home</Link>
+          <Link to="/otherhome">Other Home</Link>
+        </nav>
+
+        <Routes>
+          <Route path="/" element={<Home />} />
+          <Route path="/home" element={<Home />} />
+          <Route path="/otherhome" element={<OtherHome />} />
+        </Routes>
+      </div>
+    </BrowserRouter>
   );
 }

Each View function is pretty simple:

% git diff src/Views/Home.js
diff --git a/routing_test/src/Views/Home.js b/routing_test/src/Views/Home.js
index e69de29..ecde389 100644
--- a/routing_test/src/Views/Home.js
+++ b/routing_test/src/Views/Home.js
@@ -0,0 +1,9 @@
+function Home() {
+  return (
+    <main id='home'>
+      <h1>Home</h1>
+    </main>
+  );
+}
+
+export default Home;

What we’re doing here is not updating index.js, as GPT models seem to suggest. I didn’t see the point in making the entry point convoluted.

I wanted to configure App.js to have a static navigation element that didn’t ‘update’ while switching views. I wanted only a subsection of the page to render the new view, without touching the header.

I am stupid. React does this by default. The content of <Routes> is replaced according to each route defined. When you click a link elsewhere (like the nav), the new view renders into the page’s content inside that <Routes> block.

So if you’re confused like I was, there you go.
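If it helps to picture what’s happening, the swap is conceptually just a lookup from the current path to an element. A toy model (my own sketch, nothing like react-router’s actual matcher, which also handles params, wildcards, and route ranking):

```javascript
// Toy model of what <Routes> does: pick the element whose path
// matches the current URL, and render that into the block.
const routes = [
  { path: "/", element: "Home" },
  { path: "/home", element: "Home" },
  { path: "/otherhome", element: "OtherHome" },
];

function matchRoute(pathname) {
  const route = routes.find((r) => r.path === pathname);
  return route ? route.element : "NotFound";
}

console.log(matchRoute("/otherhome")); // "OtherHome"
```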

Styling

We’ve all seen the mass-generated SaaS spam with beautiful UIs. These projects are being pumped out en masse; how do these developers have such creative flair?

Obviously they don’t.

Most people are using a “component library”. To me, these seem like “Bootstrap” for React, and there are heaps of great ones. I was torn between ant.design and shadcn.

Shadcn is right up my alley. It’s exactly what I want things to look like; however, it requires Next.js, and as much as I want to learn that, I don’t want to get derailed right now. Ant.design lets me use regular React, so I’m using that for easy componentry.

At least I hope it’ll be easy.

To use Ant you need to install it with npm:

npm install antd --save

From our code, we can easily include components:

import { Row, Col, Card } from 'antd';
function Home() {

  const cards = [
      { title: "Card 1",  content: "Content for card 1" },
  ]

  return (
    <main id='home'>
      <Row gutter={[16, 16]}>
        {cards.map((card, i) => (
          <Col key={i} span={6}>
            <Card title={card.title}>{card.content}</Card>
          </Col>
        ))}
      </Row>
    </main>
  );
}

export default Home;

Bonus Rounds

For my Vim friends out there who don’t know about macros, you’re about to learn.

Go to the first entry of the cards const. Press qa (that is, q then a) to start recording a macro into the a register.

Now Vim is recording what you’re doing. yy will yank the line under your cursor. p will paste it below the current line. Use hjkl or the arrow keys to navigate to the first 1. Press <C-a>, that is, Ctrl+a; it will increment the 1 to 2. Go to the next 1 and press <C-a> again. Press q to stop recording.

Now you can press 15@a in normal mode and Vim will run that macro 15 times. You’ll end up with a pile of cards, each numbered one higher than the last.

Layout

We don’t strictly need to do this but I want to feel like I’m building something real. I don’t, however, want to spend my entire day on CSS.

Again, using Ant.Design we can just use their components.

The following is what I have come up with for App.js. I used GPT to figure out wiring up the routing because I just couldn’t see how the links fit together. It seems to work just fine for jumping between Home and Listings:

import { 
  BrowserRouter,
  Routes,
  Route,
  useNavigate,
  } from "react-router-dom";

import {
  Menu,
  Layout,
  } from 'antd';

// Per page CSS
import "./App.css";

// Views
import Home from "./Views/Home.js";
import Listings from "./Views/Listings.js";

// I don't know why we do this yet
const { Sider, Content } = Layout;

function SiderMenu() {
  
  const navigate = useNavigate();

  const menu_items = [
    { key: "1", label: 'Home', to: "/" },
    { key: "2", label: 'Listings', to: "/listings" },
  ];

  return (
    <Menu
      items={menu_items}
      onClick={(e) => {
        const item = menu_items.find(i => i.key === e.key);
        if (item) { navigate(item.to); }
      }}
    />
  )

}

function App() {

  return (
    <BrowserRouter>
      <Layout>

        <Sider>
          <SiderMenu />
        </Sider>

        <Content>
          <Routes>
            <Route path="/" element={<Home />} />
            <Route path="/listings" element={<Listings />} />
          </Routes>
        </Content>
      </Layout>
    </BrowserRouter>
  );
}

export default App;

Two things to note:

  1. The key of the menu item is expressly defined as a string. Apparently Ant handles them as strings internally.
  2. We removed the <nav> with the ‘to’ links. I ported the naming convention into the menu items and just used an onClick callback to trigger the navigation via useNavigate.

Seems to work just fine and I can move on learning within something that kind of looks like a genuine website.
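That key-to-route lookup inside the onClick handler is plain array logic, so it can be sketched and tested on its own (routeForKey is my own name for it, not an Ant API):

```javascript
// Standalone version of the menu click lookup: map an Ant Menu key
// back to the route stored alongside it in menu_items.
const menu_items = [
  { key: "1", label: "Home", to: "/" },
  { key: "2", label: "Listings", to: "/listings" },
];

function routeForKey(key) {
  const item = menu_items.find((i) => i.key === key);
  return item ? item.to : null; // null when the key is unknown
}

console.log(routeForKey("2")); // "/listings"
```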

CRUD Cycle

The second most important thing will be performing CRUD operations against a database of some description.

Because React apps are frontend applications, we’ll need to define some kind of backend for them to interact with. The easiest way to hook this React app up to a database is to put some kind of API in between.

To better structure my project, I’ve just split my project up into frontend/ and backend/ directories.

To create the backend/ directory, I used the following commands:

mkdir backend 
cd backend
npm init -y
npm install express knex mysql2
npx knex init
npm install --save-dev nodemon

We install Express because using it to define APIs is way more sane than the built-in HTTP module for Node. Knex is a SQL query builder that lets us define database migrations, and mysql2 is the driver it uses to talk to MariaDB.

nodemon will refresh our server when code changes. Good for dev.

This is also the part where we might need to get Docker going, because we now have two moving parts to consider. This is relatively easy. I didn’t want to get bogged down optimising my Docker configuration for this, so I’m going with what ChatGPT generated:

# docker-compose.yaml
services:

  mariadb:
    image: mariadb:11
    container_name: mariadb
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: toor
      MYSQL_DATABASE: somedb
      MYSQL_USER: someuser
      MYSQL_PASSWORD: somepass
    volumes:
      - db_data:/var/lib/mysql

  backend:
    build: ./backend
    container_name: backend
    restart: unless-stopped
    environment:
      DB_HOST: mariadb
      DB_PORT: 3306
      DB_NAME: somedb
      DB_USER: someuser
      DB_PASS: somepass
      BE_PORT: 3000
    volumes:
      - ./backend:/app
    ports:
      - 3000:3000
    depends_on:
      - mariadb

  frontend:
    build: ./frontend
    container_name: frontend
    restart: unless-stopped
    environment:
      HOST: 0.0.0.0
      PORT: 8000
    volumes:
      - ./frontend:/app
    ports:
      - 8000:8000
    depends_on:
      - backend

volumes:
  db_data:


# frontend/Dockerfile
FROM node:20-slim

WORKDIR /app

COPY package*.json ./
RUN npm install

COPY . .

CMD [ "npm", "start" ]
# backend/Dockerfile
FROM node:20-slim

WORKDIR /app

COPY package*.json ./
RUN npm install

COPY . .

CMD [ "npx", "nodemon", "index.js" ]

Yes, it’s stupid running the same image for both things and yes, I have to rebuild every time I update the app.

The one change that I did make from ChatGPT’s recommendation was to mount the code at /app. Considering that it doesn’t need to be precompiled here, I figured that would be fine.
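One gap worth flagging in the compose file: depends_on only waits for the mariadb container to start, not for the database to accept connections, so the backend can race it on first boot. A healthcheck would close that gap; this is my addition, assuming the healthcheck.sh script that ships in the official mariadb image:

```yaml
services:
  mariadb:
    # ...as above, plus:
    healthcheck:
      test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
      interval: 5s
      retries: 10

  backend:
    # ...as above, but wait until the DB is actually ready:
    depends_on:
      mariadb:
        condition: service_healthy
```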

With that done, I created backend/index.js and added the following generated API to it:

const express = require("express");
const app = express();
const PORT = process.env.BE_PORT || 3000;

app.use(express.json()); // parse JSON request bodies

// Example route
app.get("/api/hello", (req, res) => {
  res.json({ message: "Hello from Express!" });
});

// Example POST route
app.post("/api/echo", (req, res) => {
  res.json({ received: req.body });
});

app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});

Doing a docker compose up, everything seems to be working. I can access my React app from my browser and I can hit the API via curl. All is fine with the world.

The next thing we want to do is configure our backend to talk to MariaDB. I’ve opted to use Knex for this to keep initialisation fairly easy. I updated the default backend/knexfile.js to look like this:

// Update with your config settings.

/**
 * @type { Object.<string, import("knex").Knex.Config> }
 */
module.exports = {

  development: {
    client: 'mysql2',
    connection: {
      host: process.env.DB_HOST,
      port: Number(process.env.DB_PORT),
      database: process.env.DB_NAME,
      user: process.env.DB_USER,
      password: process.env.DB_PASS
    },
    pool: { min: 2, max: 10 },
    migrations: {
      directory: "./db/migrations",
      tableName: "knex_migrations",
    },
    seeds: {
      directory: "./db/seeds",
    },

  },

};

You can then jump into your backend container and use npx to create a migration file to start building your database:

user@localmachine:listings # docker compose exec backend bash
root@933b4a1f61bd:/app# npx knex migrate:make create_listings_table
Using environment: development
Using environment: development
Using environment: development
Created Migration: /app/db/migrations/20251106083358_create_listings_table.js

We’re then provided with the following file in backend/db/migrations/20251106083358_create_listings_table.js:

/**
 * @param { import("knex").Knex } knex
 * @returns { Promise<void> }
 */
exports.up = function(knex) {
  
};

/**
 * @param { import("knex").Knex } knex
 * @returns { Promise<void> }
 */
exports.down = function(knex) {
  
};

Yeah, not very helpful, I’m aware.

Fortunately we don’t have to spend much time on this and can let ChatGPT guess what to do from the prompt “create table migration knex” followed by “no this <insert above>”:

/**
 * @param { import("knex").Knex } knex
 * @returns { Promise<void> }
 */
exports.up = function(knex) {
  return knex.schema.createTable("listings", (table) => {
    table.increments("id").primary();       // auto-increment primary key
    table.string("title").notNullable();    // title column
    table.text("description");              // description column
    table.decimal("price", 10, 2);          // price column with 2 decimals
    table.timestamps(true, true);           // created_at & updated_at timestamps
  });
};

/**
 * @param { import("knex").Knex } knex
 * @returns { Promise<void> }
 */
exports.down = function(knex) {
  return knex.schema.dropTableIfExists("listings");
};

Honestly, good enough. It seems to have guessed that my ‘listings’ are going to have a price. I was going for the craigslist style of giving the users the wheel but whatever.

You can apply this migration from inside your backend container:

root@35af47e3bd6b:/app# npx knex migrate:latest
Using environment: development
Batch 1 run: 1 migrations

We can now use GPT to generate a stub for our index.js to integrate knex and mysql into our API:

const express = require("express");
const knexConfig = require("./knexfile").development;
const knex = require("knex")(knexConfig);
const app = express();

app.use(express.json()); // for parsing JSON bodies

// Test DB connection
knex.raw("SELECT 1")
  .then(() => console.log("Database connected!"))
  .catch((err) => console.error("DB connection error:", err));

// -------------------
// GET all listings
// -------------------
app.get("/listings", async (req, res) => {
  try {
    const listings = await knex("listings").select("*");
    res.json(listings);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

// -------------------
// POST a new listing
// -------------------
app.post("/listings", async (req, res) => {
  const { title, description, price } = req.body;

  if (!title || !price) {
    return res.status(400).json({ error: "Title and price are required" });
  }

  try {
    const [id] = await knex("listings").insert({ title, description, price });
    const newListing = await knex("listings").where({ id }).first();
    res.status(201).json(newListing);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

const PORT = process.env.BE_PORT || 5000;
app.listen(PORT, () => console.log(`Server running on port ${PORT}`));

Juicy.

And because we’re running with nodemon, we don’t have to restart our container; we can just curl this bugger:

% curl localhost:3000/listings
[]%

You can also post a listing using the API:

% curl -X POST http://localhost:3000/listings \
  -H "Content-Type: application/json" \
  -d '{"title":"some title","description":"some desc","price":200}'

{"id":1,"title":"some title","description":"some desc","price":"200.00","created_at":"2025-11-06T08:46:22.000Z","updated_at":"2025-11-06T08:46:22.000Z"}%

% curl localhost:3000/listings
[{"id":1,"title":"some title","description":"some desc","price":"200.00","created_at":"2025-11-06T08:46:22.000Z","updated_at":"2025-11-06T08:46:22.000Z"}]%

It has been such a long time since I was testing CVEs with Curl that I forgot the Content-Type header. That was bloody embarrassing. Don’t make my mistakes!
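For context on why that header matters: express.json() only attempts to parse bodies whose Content-Type declares JSON; anything else is skipped and req.body is left unset. A toy illustration of the behaviour (my sketch, not Express internals):

```javascript
// Toy illustration of express.json()'s gatekeeping: parsing only
// happens when the Content-Type header says the body is JSON.
function toyJsonParser(contentType, rawBody) {
  if (contentType !== "application/json") {
    return undefined; // middleware skips; req.body stays unset
  }
  return JSON.parse(rawBody);
}

console.log(toyJsonParser("application/json", '{"a":1}')); // { a: 1 }
```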

Now this is where things took a bit of a turn. We have a few moving parts and I needed to align them all nicely. The stages we need to nail:

  1. Backend needs to accept POST and GET requests for listings
  2. Frontend needs to pull a list of listings from the API and display them
  3. Frontend needs to send new listings to the API
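Stage 3 in that list ultimately boils down to building one POST request. The options object can be assembled as a pure function and inspected without a running backend (buildListingRequest is a hypothetical helper, not code from the app):

```javascript
// Hypothetical helper: build the fetch() options for posting a listing.
// Keeping it pure makes it easy to inspect without a server.
function buildListingRequest(listing) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(listing),
  };
}

const req = buildListingRequest({ title: "Couch", price: 50 });
console.log(req.body); // {"title":"Couch","price":50}
```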

Drilling down into this behaviour: honestly, not that hard.

We have knex installed in the backend project from earlier. We also have a table defined via a migration:

# backend/db/migrations/20251106083358_create_listings_table.js

/**
 * @param { import("knex").Knex } knex
 * @returns { Promise<void> }
 */
exports.up = function(knex) {
  return knex.schema.createTable("listings", (table) => {
    table.increments("id").primary();       // auto-increment primary key
    table.string("title").notNullable();    // title column
    table.text("description");              // description column
    table.decimal("price", 10, 2);          // price column with 2 decimals
    table.timestamps(true, true);           // created_at & updated_at timestamps
  });
};

/**
 * @param { import("knex").Knex } knex
 * @returns { Promise<void> }
 */
exports.down = function(knex) {
  return knex.schema.dropTableIfExists("listings");
};

In backend/index.js, we need to check that the database connection is okay, then run this migration on startup:

// Test DB connection
knex.raw("SELECT 1")
  .then(() => console.log("Database connected!"))
  .catch((err) => console.error("DB connection error:", err));

// Run latest migrations
knex.migrate.latest()
  .then(() => console.log("Migrations complete"))

Next, we need to define how to handle our GET and POST requests. Getting listings is relatively simple with Knex:

app.get("/listings", async (req, res) => {
  try {
    const listings = await knex("listings").select("*");
    res.json(listings);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

And adding a new listing via a POST request is super simple again:

app.post("/listings", async (req, res) => {
  const { title, description, price } = req.body;

  if (!title || !price) {
    return res.status(400).json({ error: "Title and price are required" });
  }

  try {
    const [id] = await knex("listings").insert({ title, description, price });
    const newListing = await knex("listings").where({ id }).first();
    res.status(201).json(newListing);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

We probably don’t want to return the new listing’s details in a real app, but it’s totally fine for this test application to use that as a form of verbosity.
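One subtlety in that validation: !price is a truthiness check, so a listing priced at 0 would be rejected alongside a missing price. A stricter check (my own sketch, not from the generated code) might look like:

```javascript
// Hypothetical stricter validation: allow a price of 0, reject only
// missing titles and non-numeric prices.
function validateListing({ title, price }) {
  if (!title) return "Title is required";
  if (price === undefined || price === null || Number.isNaN(Number(price))) {
    return "Price must be a number";
  }
  return null; // valid
}

console.log(validateListing({ title: "Free couch", price: 0 })); // null
```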

Finally, I ran into issues with CORS, which we’ll get into in a minute. This was resolved by running both frontend and backend behind a reverse proxy. Fixing it properly isn’t within the scope of learning how to do API calls in React.

For my reverse proxy, I had to add the BE_HOST env var so the server listens on all interfaces:

const PORT = process.env.BE_PORT || 5000;
const HOST = process.env.BE_HOST || "0.0.0.0";
app.listen(PORT, HOST, () => console.log(`Server running on port ${PORT}`));

The following is the full index.js file. You’ll notice line 8 has a CORS definition. This should get around the CORS issues, technically:

const express = require("express");
const cors = require("cors");

const knexConfig = require("./knexfile").development;
const knex = require("knex")(knexConfig);

const app = express();
app.use(cors());
app.use(express.json()); // for parsing JSON bodies

// Test DB connection
knex.raw("SELECT 1")
  .then(() => console.log("Database connected!"))
  .catch((err) => console.error("DB connection error:", err));

// Run latest migrations
knex.migrate.latest()
  .then(() => console.log("Migrations complete"))

// -------------------
// GET all listings
// -------------------
app.get("/listings", async (req, res) => {
  try {
    const listings = await knex("listings").select("*");
    res.json(listings);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

// -------------------
// POST a new listing
// -------------------
app.post("/listings", async (req, res) => {
  const { title, description, price } = req.body;

  if (!title || !price) {
    return res.status(400).json({ error: "Title and price are required" });
  }

  try {
    const [id] = await knex("listings").insert({ title, description, price });
    const newListing = await knex("listings").where({ id }).first();
    res.status(201).json(newListing);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

const PORT = process.env.BE_PORT || 5000;
const HOST = process.env.BE_HOST || "0.0.0.0";
app.listen(PORT, HOST, () => console.log(`Server running on port ${PORT}`));

Just so you’re aware, backend/knexfile.js has not changed:

// Update with your config settings.

/**
 * @type { Object.<string, import("knex").Knex.Config> }
 */
module.exports = {

  development: {
    client: 'mysql2',
    connection: {
      host: process.env.DB_HOST,
      port: Number(process.env.DB_PORT),
      database: process.env.DB_NAME,
      user: process.env.DB_USER,
      password: process.env.DB_PASS
    },
    pool: { min: 2, max: 10 },
    migrations: {
      directory: "./db/migrations",
      tableName: "knex_migrations",
    },
    seeds: {
      directory: "./db/seeds",
    },

  },

};

And we don’t have to run CLI commands at this point to perform migrations. In fact, just adding a new migration file will cause it to run, as the app is constantly restarting itself at this point.

From now I’ll be referring to the backend as /api/ as it’s sitting behind a reverse proxy now.

The React code for pulling listings is surprisingly simple without reaching for extra libraries. To get started, we define a React state to store our listings in. I’ve called this “cards” because I’ll be generating Ant Design “Cards”:

import { useState } from "react";
...
const [cards, setCards] = useState([]);

// inside an async function (see the full component below):
const response = await fetch("/api/listings");
const data = await response.json();
setCards(data);

I’m still not 1000% sure on React conventions, but what I’m reading tells me to use state for this. useState([]) above returns the current value (initially an empty array) and a function that updates it. So setCards() will update the content of cards.
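If the [value, setter] pair feels mysterious, it can be imagined as a closure over one slot of state. This toy model is my own illustration, nothing like React’s real implementation (which is tied to the component render cycle), and it returns a getter instead of the bare value just so it runs standalone:

```javascript
// Toy model of useState: a closure over one slot of state.
// React hands back the value itself and re-renders on set; here we
// hand back a getter purely to illustrate the pair's shape.
function toyUseState(initial) {
  let value = initial;
  const get = () => value;
  const set = (next) => { value = next; };
  return [get, set];
}

const [getCards, setCards] = toyUseState([]);
setCards([{ title: "Card 1" }]);
console.log(getCards().length); // 1
```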

It’s fun not having to constantly define structs!

What isn’t fun is whatever this is, but I understand why it’s there.

const [loading, setLoading] = useState(true);
try {
...
} finally {
	setLoading(false);
}

This creates a state called “loading” and a setter called “setLoading”, much like we did for the listing data. We use this to show a ‘loading’ message while the request is still in flight.

if (loading) return <p>Loading...</p>;

I asked ChatGPT for just the function to pull a GET request and it gave me all of this extra stuff to learn. Fun.
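The pattern does generalise, though: flip the flag on before the async work and off in a finally block, so it clears on success and failure alike. As a standalone sketch (withLoading is my name for it, not an API from anywhere):

```javascript
// Generic loading-flag pattern around any async call.
async function withLoading(setLoading, fn) {
  setLoading(true);
  try {
    return await fn();
  } finally {
    setLoading(false); // runs whether fn resolved or threw
  }
}
```

In the component, setLoading would be the useState setter; here it can be any function.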

Finally, the other big thing is displaying the listing data as a set of cards. I used more Ant Design components to make displaying things a little more pleasant:

  return (
    <main id="home">
      <Row gutter={[16, 16]}>
        {cards.map((card, i) => (
          <Col key={i} span={6}>
            <Card title={card.title}>{card.description}</Card>
          </Col>
        ))}
      </Row>
    </main>

Here I have repurposed our previous card loop to show a grid of cards. I’m not doing anything to improve aesthetics beyond the basics; it’s just to show that listing data is pulled from the database, via an API.

The full Listings.js view is here:

// frontend/src/Views/Listings.js

import { useState, useEffect } from "react";
import { Row, Col, Card } from "antd";

function Listings() {
  const [cards, setCards] = useState([]);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    const fetchListings = async () => {
      try {
        const response = await fetch("/api/listings");
        if (!response.ok) throw new Error("Failed to fetch listings");
        const data = await response.json();
        setCards(data);
      } catch (err) {
        console.error(err);
      } finally {
        setLoading(false);
      }
    };

    fetchListings();
  }, []);

  if (loading) return <p>Loading...</p>;

  return (
    <main id="home">
      <Row gutter={[16, 16]}>
        {cards.map((card, i) => (
          <Col key={i} span={6}>
            <Card title={card.title}>{card.description}</Card>
          </Col>
        ))}
      </Row>
    </main>
  );
}

export default Listings;

Now for creating a new listing. Oh boy.

Let’s start off with the view then, because that’ll make a lot of sense:

  return (
    <main id="newlisting">
      {contextHolder}
      <Form
        name="basic"
        onFinish={onFinish}
        onFinishFailed={onFinishFailed}
        autoComplete="off"
      >
        <Form.Item
          label="Title"
          name="title"
          layout="vertical"
          rules={[{ required: true, message: "Please input your title!" }]}
        >
          <Input />
        </Form.Item>

        <Form.Item
          label="Price"
          name="price"
          layout="vertical"
          rules={[{ required: true, message: "Please input your price!" }]}
        >
          <Input />
        </Form.Item>

        <Form.Item
          label="Description"
          name="description"
          layout="vertical"
          rules={[{ required: true, message: "Please input your description!" }]}
        >
          <TextArea />
        </Form.Item>

        <Button type="primary" htmlType="submit" layout="vertical">
          Submit
        </Button>
      </Form>
    </main>
  );

Honestly, this is pretty self-explanatory. I copied and pasted the example from Ant Design and rejigged it for our POST data. It displays a nice enough little form with labels above the text boxes.

{contextHolder} is a React expression (a term that I’ve just learned). All that I understand from the documentation is that it allows me to trigger alerts in that spot. This will be important in a moment, so let’s look at the “onFinish” function.

First of all, we have the following snippet showing how we take the form data and POST it to the listings API. We’re just going in raw, because we’re learning:

      const response = await fetch("/api/listings", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(values),
      });

Actually, do you even want to do client-side data validation in React? Or is that a PHP habit? The user can totally bypass the UI and make their API calls from curl, so client-side validation is basically only a guard rail for regular users.

Ah, no matter.
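That said, one cheap client-side touch-up is worth noting: Ant’s <Input> yields strings, so price arrives as "200" rather than 200. A hypothetical normalisation step before the POST (normalizeListing is my own sketch, not code from the form):

```javascript
// Hypothetical: coerce the price field to a number before POSTing,
// rejecting non-numeric input early instead of shipping a string.
function normalizeListing(values) {
  const price = Number(values.price);
  if (Number.isNaN(price)) {
    throw new Error("Price must be numeric");
  }
  return { ...values, price };
}

console.log(normalizeListing({ title: "Bike", price: "200" }).price); // 200
```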

The second section shows that funky Antd success message:

messageApi.open({
	type: "success",
	content: "You posted your listing",
});

One thing annoyed me though: we don’t get redirected anywhere. It just flashes some green and then… well, nothing. You just press submit again, I guess. To get around this, I whipped out the time-honoured setTimeout:

import { useNavigate } from "react-router-dom";
...
const navigate = useNavigate();
...
setTimeout(() => {
	navigate("/listings");
}, 1000);

We repurpose that navigate function from earlier to redirect the user to the listings page about a second after they’ve submitted their listing. The workflow is good enough the way it is. People could mash the submit button, but we’re not here to polish UX. Just post and get.
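If the button-mashing ever did matter, a small guard that drops repeat clicks while a submission is pending would do it (makeSubmitOnce is my own sketch, not from the app):

```javascript
// Hypothetical double-submit guard: wrap an async handler so repeat
// calls are ignored while one is still in flight.
function makeSubmitOnce(submitFn) {
  let pending = false;
  return async (...args) => {
    if (pending) return null; // drop repeat clicks
    pending = true;
    try {
      return await submitFn(...args);
    } finally {
      pending = false;
    }
  };
}
```

onFinish would be wrapped as makeSubmitOnce(onFinish) before being handed to the Form.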

Because this now redirects us back to the listings, we’ve come full circle. Here’s the full file, with my mix of Antd code, ChatGPT assistance, and my own touch-ups to make it all work nicely:

import { Button, Form, Input, message } from "antd";
import { useNavigate } from "react-router-dom";
const { TextArea } = Input;

function NewListing() {
  const [messageApi, contextHolder] = message.useMessage();
  const navigate = useNavigate();

  const onFinish = async (values) => {
    try {
      const response = await fetch("/api/listings", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(values),
      });

      if (!response.ok) {
        throw new Error(`Server error: ${response.status}`);
      }

      const data = await response.json();

      messageApi.open({
        type: "success",
        content: "You posted your listing",
      });

      console.log("Success:", data);

      setTimeout(() => {
        navigate("/listings");
      }, 1000);
    } catch (err) {
      messageApi.open({
        type: "error",
        content: "Something went wrong: " + err.message,
      });
      console.error(err);
    }
  };

  const onFinishFailed = (errorInfo) => {
    console.log("Failed:", errorInfo);
  };

  return (
    <main id="newlisting">
      {contextHolder} 
      <Form
        name="basic"
        onFinish={onFinish}
        onFinishFailed={onFinishFailed}
        autoComplete="off"
      >
        <Form.Item
          label="Title"
          name="title"
          layout="vertical"
          rules={[{ required: true, message: "Please input your title!" }]}
        >
          <Input />
        </Form.Item>

        <Form.Item
          label="Price"
          name="price"
          layout="vertical"
          rules={[{ required: true, message: "Please input your price!" }]}
        >
          <Input />
        </Form.Item>

        <Form.Item
          label="Description"
          name="description"
          layout="vertical"
          rules={[{ required: true, message: "Please input your description!" }]}
        >
          <TextArea />
        </Form.Item>

        <Button type="primary" htmlType="submit" layout="vertical">
          Submit
        </Button>
      </Form>
    </main>
  );
}

export default NewListing;

I won’t lie; I didn’t want to really use ChatGPT for much of this.

It was useful for discovering Knex and avoiding reading their horrible documentation. It was also super useful for quickly getting me through some Docker configurations and for picking between certain libraries and approaches.

ChatGPT just could not help itself though and it gave me way more information than I was hoping to get from it. I was trying to use it to aid in decision making, and it instead gave me a bunch of logic that I then couldn’t get out of my brain.

All up, the C and R of CRUD work fine. We could update the code to cover the U and D but honestly, I’ve achieved the core of this challenge, and that’s performing set and get API calls via React.


NextJS

I need to do up a website for Oceania Gamers as fast as possible to help distribute some information about our move to Matrix. Might as well use this as a launch pad for NextJS.

I’m using NextJS with Tailwind CSS. I’ll cover this in another post because this one already reads like a 5,000-word essay.