Local language models

Last updated: Monday, 23 September 2024

My ongoing experiments with local language models focus on small models and the gradual, exploratory development of generative AI and conversational interfaces. By prioritising efficiency, sustainability, and step-by-step learning, these explorations aim to reduce dependence on cloud-based AI platforms while investigating the potential for multi-user collaborative tools and different patterns of human-AI interaction.

July 2024: Upgraded my Framework 13 laptop’s RAM and SSD in anticipation of experimenting with locally-hosted generative language models; inspired, more or less directly, by latent intimacies, and looking to (practically) apply some concepts emerging from Tom’s and my panpneumaton/panpneumatics research.

There’s an obvious, central tension between using smaller, CPU-friendly models and the desire to work with more advanced generative AI capabilities. Here, I’m interested in pushing the boundaries of what’s possible on limited resources, through techniques like LoRA (low-rank adaptation) fine-tuning.
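
As a concrete starting point, here’s a minimal sketch of attaching LoRA adapters to a small model with Hugging Face’s peft library. The model name, target modules, and hyperparameters are illustrative assumptions, not a recipe I’ve settled on:

```python
# A minimal LoRA sketch for laptop-scale fine-tuning.
# Model choice and hyperparameters below are placeholder assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "HuggingFaceTB/SmolLM-135M"  # hypothetical small-model choice
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# LoRA trains small low-rank adapter matrices instead of the full
# weights, which is what keeps fine-tuning within reach of a laptop.
lora_config = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,                        # scaling factor for adapter updates
    target_modules=["q_proj", "v_proj"],  # attention projections (Llama-style)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total weights
```

The appeal for this project is less the numbers than the reconfiguration: the adapter file is small enough to live alongside the note archive it was trained on.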

  • [?] How might the constraints of local hosting on personal hardware shape the development of more “appropriate” or “situated” AI technologies?
  • [?] How might the ability to easily fine-tune models for personal use reconfigure the relationship between users and AI systems?
  • [?] How can we design a system that “gathers” and orchestrates multiple models while maintaining transparency about which model is contributing what? (See the sketch below.)
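
On that last question, a minimal sketch of what transparent “gathering” could look like: every model’s contribution stays labelled rather than being blended into one anonymous answer. The `Orchestra` class and the model names here are hypothetical:

```python
# A sketch of a transparent multi-model "gathering": each reply
# carries an attribution to the model that produced it.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Attributed:
    model_name: str  # which model contributed this text
    text: str

class Orchestra:
    def __init__(self) -> None:
        self.models: dict[str, Callable[[str], str]] = {}

    def register(self, name: str, generate: Callable[[str], str]) -> None:
        self.models[name] = generate

    def gather(self, prompt: str) -> list[Attributed]:
        # Ask every registered model, keeping the attribution visible
        # instead of merging outputs into one anonymous answer.
        return [Attributed(name, gen(prompt)) for name, gen in self.models.items()]

# Usage: register two local models (stubbed here) and compare replies.
orchestra = Orchestra()
orchestra.register("smol-135m", lambda p: f"[stub reply from smol-135m to: {p}]")
orchestra.register("phi-3-mini", lambda p: f"[stub reply from phi-3-mini to: {p}]")
for reply in orchestra.gather("What does 'situated' mean here?"):
    print(f"{reply.model_name}: {reply.text}")
```

Keeping the per-model attribution in the data structure, rather than only in logs, is the design choice I’d want to hold onto as this grows into a conversational interface.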

Tags: activity
