Joshua Holmer

@expedientfalcon

Models

Voyage's next-generation general-purpose embedding model - ideal for Continue, which uses embeddings for both @codebase and @docs
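
A minimal sketch of calling Voyage's embeddings endpoint directly, assuming its OpenAI-style /v1/embeddings REST API; the model name "voyage-3" is illustrative only, so substitute whichever Voyage model identifier you configure in Continue:

    # Sketch only: request an embedding from the Voyage API.
    # "voyage-3" is an illustrative model name, not necessarily the one meant above.
    import os
    import requests

    resp = requests.post(
        "https://api.voyageai.com/v1/embeddings",
        headers={"Authorization": f"Bearer {os.environ['VOYAGE_API_KEY']}"},
        json={"model": "voyage-3", "input": ["def hello(): return 'world'"]},
        timeout=30,
    )
    resp.raise_for_status()
    embedding = resp.json()["data"][0]["embedding"]
    print(len(embedding))  # embedding dimensionality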

8B-parameter version of Qwen3's large-context reranker model (Q8_0 GGUF) for running on a local Ollama instance: https://huggingface.co/ExpedientFalcon/Qwen3-Reranker-8B-Q8_0-GGUF

8B-parameter version of Qwen3's large-context embedding model for running on a local Ollama instance: https://huggingface.co/Qwen/Qwen3-Embedding-8B-GGUF
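
For any of the local Qwen3 embedding builds listed here, one way to try them is to pull the GGUF straight from Hugging Face into Ollama (e.g. ollama pull hf.co/Qwen/Qwen3-Embedding-8B-GGUF) and then call Ollama's embeddings endpoint. A minimal sketch, assuming a default Ollama install listening on localhost:11434:

    # Sketch only: request an embedding from a local Ollama instance.
    # Assumes the model was pulled first, e.g.:
    #   ollama pull hf.co/Qwen/Qwen3-Embedding-8B-GGUF
    import requests

    resp = requests.post(
        "http://localhost:11434/api/embeddings",
        json={
            "model": "hf.co/Qwen/Qwen3-Embedding-8B-GGUF",
            "prompt": "How does Continue index the codebase?",
        },
        timeout=120,
    )
    resp.raise_for_status()
    vector = resp.json()["embedding"]
    print(len(vector))  # embedding dimensionality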

4B-parameter version of Qwen3's large-context reranker model (Q5_K_M GGUF) for running on a local Ollama instance: https://huggingface.co/ExpedientFalcon/Qwen3-Reranker-4B-Q5_K_M-GGUF

0.6B-parameter version of Qwen3's large-context embedding model for running on a local Ollama instance: https://huggingface.co/Qwen/Qwen3-Embedding-0.6B-GGUF

0.6B-parameter version of Qwen3's large-context reranker model (Q4_K_M GGUF) for running on a local Ollama instance: https://huggingface.co/ExpedientFalcon/Qwen3-Reranker-0.6B-Q4_K_M-GGUF

4B-parameter version of Qwen3's large-context embedding model for running on a local Ollama instance: https://huggingface.co/Qwen/Qwen3-Embedding-4B-GGUF

4B-parameter version of Qwen3's large-context reranker model (Q4_K_M GGUF) for running on a local Ollama instance: https://huggingface.co/ExpedientFalcon/Qwen3-Reranker-4B-Q4_K_M-GGUF
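
The reranker builds are generative scorers: Qwen3-Reranker judges whether a document answers a query, and the relevance score normally comes from the model's yes/no token probabilities. Ollama's generate API does not readily expose those probabilities, so the sketch below simply parses a literal yes/no answer as a crude approximation; the prompt template is also simplified, so consult the Hugging Face model card for the exact format, and the model name shown assumes the GGUF was pulled via Ollama's hf.co/ syntax:

    # Rough sketch: rank documents with a local Qwen3 reranker via Ollama's generate endpoint.
    # Parsing a literal yes/no answer is a crude stand-in for the model's logit-based score.
    import requests

    def rerank(query: str, documents: list[str], model: str) -> list[tuple[float, str]]:
        scored = []
        for doc in documents:
            prompt = (
                "Judge whether the Document answers the Query. "
                'Answer only "yes" or "no".\n'
                f"Query: {query}\nDocument: {doc}\nAnswer:"
            )
            resp = requests.post(
                "http://localhost:11434/api/generate",
                json={"model": model, "prompt": prompt, "stream": False},
                timeout=300,
            )
            resp.raise_for_status()
            answer = resp.json()["response"].strip().lower()
            scored.append((1.0 if answer.startswith("yes") else 0.0, doc))
        return sorted(scored, reverse=True)

    # Example (model name assumes a prior `ollama pull hf.co/ExpedientFalcon/Qwen3-Reranker-4B-Q4_K_M-GGUF`):
    # rerank("how do I reset a password?", ["auth docs ...", "billing docs ..."],
    #        model="hf.co/ExpedientFalcon/Qwen3-Reranker-4B-Q4_K_M-GGUF")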
