
How to run PowerShell on Android

Overview

A recent update to Android OS enables running a Debian Linux distribution on an Android phone. With a workable Linux distribution available on mobile, we now have the ability to install and run PowerShell in this Linux environment.

Prerequisites

Google Pixel: The Linux terminal feature is still very new, so Google Pixels are currently the only phones receiving the update. As with other new Android features, non-Pixel Android phones will likely receive the update in the coming months.

March 2025 software update: The Linux terminal was included in the March 2025 Pixel update, so make sure to install the latest software update.

Procedures

Enable the Linux terminal

The Linux terminal can be enabled from Developer options. If you don't have Developer options enabled, follow these procedures to enable it.

Once enabled, navigate to Settings > System > Developer options. If you've received the latest update, you'll see the Linux development environment option under the Debugging section. Under this option, enable (Experimental) Run Linux terminal on Android.

[Screenshot: Developer options showing the Linux development environment setting]

[Screenshot: Enabling the Linux development environment]

Install Linux terminal

Once the Linux development environment is enabled, open your app drawer and launch the Terminal app--you'll see "Install Linux terminal". Tap Install in the corner of the screen. The install will take a few minutes.

[Screenshot: Install Linux terminal]

Install PowerShell

The Google Pixel runs on an ARM64-based processor, so we'll follow these procedures for installing PowerShell as a binary archive, rather than from a package manager like APT. Simply copy the code from the procedures and paste it into the terminal.
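If you'd like to confirm the architecture before downloading anything, a quick check in the terminal will tell you which binary you need:

uname -m
# aarch64 -> use the linux-arm64 tarball
# x86_64  -> use the linux-x64 tarball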

Important

The example from the above procedures specifically references the x64 edition of the PowerShell binary (e.g. powershell-7.5.0-linux-x64.tar.gz). The correct binary for arm64 processors is powershell-7.5.0-linux-arm64.tar.gz. The code below follows the same procedures as provided by Microsoft, but for arm64 instead of x64, which is compatible with the Google Pixel:

# Download the arm64 PowerShell release
curl -L -o /tmp/powershell.tar.gz https://github.com/PowerShell/PowerShell/releases/download/v7.5.0/powershell-7.5.0-linux-arm64.tar.gz
# Create the target directory and extract the archive into it
sudo mkdir -p /opt/microsoft/powershell/7
sudo tar zxf /tmp/powershell.tar.gz -C /opt/microsoft/powershell/7
# Make the binary executable and symlink it onto the PATH
sudo chmod +x /opt/microsoft/powershell/7/pwsh
sudo ln -s /opt/microsoft/powershell/7/pwsh /usr/bin/pwsh

That's all you need! Simply enter pwsh to run PowerShell on your Android phone.
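To verify the install, you can check the version from inside PowerShell:

pwsh
# at the PowerShell prompt, print the version:
$PSVersionTable.PSVersion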

[Screenshot: PowerShell running in the terminal]

Set PowerShell as the default shell (Optional)

Bash is the default shell for this terminal, but you can change your default shell by running chsh -s <shell binary> <username>. The example below sets PowerShell as the default shell for our user (droid):

sudo chsh -s /usr/bin/pwsh droid
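To confirm the change took effect (assuming the droid user from the example above), you can read the login shell field from the passwd database:

# Print the login shell for the user "droid"
getent passwd droid | cut -d: -f7
# expected output: /usr/bin/pwsh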

Copy/paste

To skip all the explanatory information and simply install PowerShell, copy the following code into your Linux terminal:

curl -L -o /tmp/powershell.tar.gz https://github.com/PowerShell/PowerShell/releases/download/v7.5.0/powershell-7.5.0-linux-arm64.tar.gz
sudo mkdir -p /opt/microsoft/powershell/7
sudo tar zxf /tmp/powershell.tar.gz -C /opt/microsoft/powershell/7
sudo chmod +x /opt/microsoft/powershell/7/pwsh
sudo ln -s /opt/microsoft/powershell/7/pwsh /usr/bin/pwsh
sudo chsh -s /usr/bin/pwsh droid
pwsh

Troubleshooting

The Linux feature is still in the experimental stage, and as such it can be pretty glitchy. Below are some tips to resolve any issues you may run into.

Enable notifications

The terminal displays a persistent notification while it's running.

[Screenshot: The terminal's persistent notification]

When notifications are disabled, the app tends to act up. Ensuring notifications are enabled avoids some of these issues.

[Screenshot: Allowing notifications for the Terminal app]

Pause the app

If the app is acting erratically or not responding, I've found that pausing it can resolve some issues. Press and hold the app icon and select Pause app. Then open the app again and, when prompted, select Unpause app.

Recovery

If the app still isn't behaving properly or keeps crashing, you can reset the app's data: tap the settings "gear" icon in the upper-right corner, navigate to Recovery > Reset to initial version, and tap Reset.

[Screenshot: The Recovery reset option]

Warning

This will delete all data related to the Linux environment on the phone.

Re-enable the Linux environment

There are times when the app is acting up so much that Recovery isn't even an option. In this case, simply disabling, then re-enabling the Linux environment via Developer options (as described above) will reset the app.

Warning

As with the Recovery option, this will also delete all data related to the Linux environment on the phone.



Running an LLM in a CI pipeline

Overview

With the recent explosion of AI and large language models (LLM), I've been brainstorming how to take advantage of AI capabilities within a CI/CD pipeline.

Most of the major AI providers have a REST API, so I could of course easily use that in a CI pipeline, but there are many situations where this isn't an option:

  • Cost: As many "AI wrapper" companies quickly discovered, these APIs are expensive. And queries in a CI pipeline that could run hundreds of times per day add up quickly.
  • Security: Many organizations handling sensitive or proprietary data don't want their information sent to a third party like OpenAI or Google.

To solve these issues, I wanted to see if it's possible to run an LLM locally in a CI job, to which I can send queries without worrying about API cost or revealing sensitive data.

How it's done

Tools

All the tools I'm using in this article are free to use.

Name            Description
Ollama          A free, open-source tool for running LLMs locally
Gitlab CI       A free CI/CD pipeline system developed by Gitlab for running automated jobs in the same environment as your git repository
GitHub Actions  Same as Gitlab CI, but provided by GitHub

Note

In this article I won't be getting too deep into exactly what Ollama is and how it works. To learn more about it, check out their GitHub.

Setup

To start, you'll need either a GitHub or Gitlab account, and you'll need to create your first repository. Once that's done, create a basic CI/CD pipeline--we'll name it ci:

GitHub Actions:

name: ci
on:
  push:

Gitlab CI:

workflow:
  name: ci

This creates a basic structure for a pipeline that runs on all commits. To limit the pipeline to only run on a certain branch, modify GitHub's on.push option, or Gitlab's workflow:rules. For example:

GitHub Actions:

name: ci
on:
  push:
    branches:
      - main

Gitlab CI:

workflow:
  name: ci
  rules:
    - if: $CI_COMMIT_BRANCH == 'main'

Run an LLM in a job

The ollama CLI is great for running a local, interactive chat session in your terminal. But for a non-interactive, automated CI job, it's best to interface with the Ollama API. To do this, we first need to define our ollama job and run Ollama as a service accessible to the job.

GitHub Actions:

jobs:
  ollama:
    runs-on: ubuntu-latest
    services:
      ollama:
        image: ollama/ollama

Gitlab CI:

ollama:
  services:
    - name: ollama/ollama
      alias: ollama
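Before moving on, it can help to sanity-check that the service is reachable from the job. Ollama exposes a version endpoint, so a minimal check looks like this:

# Confirm the Ollama service is up and responding
curl -sS ollama:11434/api/version
# expected: a small JSON object containing the server version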

Next we'll add our script. When we request a response from the LLM we'll need to specify a large language model to generate that response. These models can be found in Ollama's library. Any model will work, but keep in mind that models with more parameters--while providing much better responses--are much larger in size. The 671 billion parameter version of deepseek-r1, for example, is 404GB in size. As such, it's ideal to use smaller models such as Meta's llama3.2.

Prior to generating a response, we'll first need to pull the model we want using Ollama's pull API. Then we generate the response with the generate API. Any Docker image will work for this job as long as it can send web requests with tools like wget or curl. For this example we'll be using curl with the alpine/curl image.

GitHub Actions:

container: alpine/curl
steps:
  - name: Generate response
    run: |
      curl -sS -X POST -d '{"model":"llama3.2","stream":false}' ollama:11434/api/pull
      curl -sS -X POST -d '{"model":"llama3.2","stream":false,"prompt":"Hello world"}' ollama:11434/api/generate

Gitlab CI:

image: alpine/curl
script: |
  curl -sS -X POST -d '{"model":"llama3.2","stream":false}' ollama:11434/api/pull
  curl -sS -X POST -d '{"model":"llama3.2","stream":false,"prompt":"Hello world"}' ollama:11434/api/generate

Note

Ideally, the pull and generate operations would run in separate steps. GitHub uses the steps functionality for this; however, the comparable functionality in Gitlab (run) is still experimental. For simplicity, we'll run the commands in a single script in both GitHub and Gitlab.

To accomplish the same in separate steps would look like this:

GitHub Actions:

container: alpine/curl
steps:
  - name: Pull model
    run: curl -sS -X POST -d '{"model":"llama3.2","stream":false}' ollama:11434/api/pull

  - name: Generate response
    run: curl -sS -X POST -d '{"model":"llama3.2","stream":false,"prompt":"Hello world"}' ollama:11434/api/generate

Gitlab CI:

image: alpine/curl
run:
  - name: Pull model
    script: curl -sS -X POST -d '{"model":"llama3.2","stream":false}' ollama:11434/api/pull

  - name: Generate response
    script: curl -sS -X POST -d '{"model":"llama3.2","stream":false,"prompt":"Hello world"}' ollama:11434/api/generate

That's all we need--let's see the response:

> curl -sS -X POST -d '{"model":"llama3.2","stream":false}' ollama:11434/api/pull
{"status":"success"}
> curl -sS -X POST -d '{"model":"llama3.2","stream":false,"prompt":"Hello world"}' ollama:11434/api/generate
{"model":"llama3.2","created_at":"2025-02-06T18:46:52.362892453Z","response":"Hello! It's nice to meet you. Is there something I can help you with or would you like to chat?","done":true,"done_reason":"stop","context":[128004,9125,128007,276,39766,3303,33025,2696,22,8790,220,2366,11,271,128009,128006,882,128007,271,9906,1917,128009,128006,78191,128007,271,9906,0,1102,596,6555,311,3449,499,13,2209,1070,2555,358,649,1520,499,449,477,1053,499,1093,311,6369,30],"total_duration":9728821911,"load_duration":2319403269,"prompt_eval_count":27,"prompt_eval_duration":3406000000,"eval_count":25,"eval_duration":4001000000}

Parse the output

This is great, but the JSON output is a bit verbose. We can simplify the response and make it a bit more readable using the jq command.

GitHub Actions:

steps:
  - name: Install jq
    run: apk add jq
  - name: Generate response
    run: |
      curl -sS -X POST -d '{"model":"llama3.2","stream":false}' ollama:11434/api/pull | jq -r .status
      curl -sS -X POST -d '{"model":"llama3.2","stream":false,"prompt":"Hello world"}' ollama:11434/api/generate | jq -r .response

Gitlab CI:

before_script: apk add jq
script: |
  curl -sS -X POST -d '{"model":"llama3.2","stream":false}' ollama:11434/api/pull | jq -r .status
  curl -sS -X POST -d '{"model":"llama3.2","stream":false,"prompt":"Hello world"}' ollama:11434/api/generate | jq -r .response

This looks much better:

> curl -sS -X POST -d '{"model":"llama3.2","stream":false}' ollama:11434/api/pull | jq -r .status
success
> curl -sS -X POST -d '{"model":"llama3.2","stream":false,"prompt":"Hello world"}' ollama:11434/api/generate | jq -r .response
Hello! It's nice to meet you. Is there something I can help you with or would you like to chat?
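jq can extract useful metrics from the same response, too. The generate output shown earlier includes eval_count (tokens generated) and eval_duration (in nanoseconds), so a rough tokens-per-second figure can be computed directly:

# Compute generation speed as tokens per second
curl -sS -X POST -d '{"model":"llama3.2","stream":false,"prompt":"Hello world"}' ollama:11434/api/generate \
  | jq '.eval_count / .eval_duration * 1e9'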

Put it all together

This is our final product:

GitHub Actions (.github/workflows/ci.yml):

name: ci
on:
  push:

jobs:
  ollama:
    runs-on: ubuntu-latest
    services:
      ollama:
        image: ollama/ollama
    container: alpine/curl
    steps:
      - name: Install jq
        run: apk add jq
      - name: Generate response
        run: |
          curl -sS -X POST -d '{"model":"llama3.2","stream":false}' ollama:11434/api/pull | jq -r .status
          curl -sS -X POST -d '{"model":"llama3.2","stream":false,"prompt":"Hello world"}' ollama:11434/api/generate | jq -r .response

Gitlab CI (.gitlab-ci.yml):

workflow:
  name: ci

ollama:
  image: alpine/curl
  services:
    - name: ollama/ollama
      alias: ollama
  before_script: apk add jq
  script: |
    curl -sS -X POST -d '{"model":"llama3.2","stream":false}' ollama:11434/api/pull | jq -r .status
    curl -sS -X POST -d '{"model":"llama3.2","stream":false,"prompt":"Hello world"}' ollama:11434/api/generate | jq -r .response

Summary

With just a few lines of code, we're able to run an Ollama server, pull down a large language model, and generate responses--all completely local to our CI job. We can now use this capability to generate release notes, automate code review, write documentation--the possibilities are endless.
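
As a concrete sketch of that last point, here's a hypothetical job script that turns recent commit messages into draft release notes. The prompt wording and commit count are purely illustrative, and the job image would also need git installed:

# Hypothetical sketch: draft release notes from the last 20 commits
PROMPT="Write brief release notes for these commits:
$(git log --oneline -20)"
# Build the JSON payload with jq so the multi-line prompt is escaped safely,
# then send it to the generate endpoint and print just the response text
jq -n --arg p "$PROMPT" '{model: "llama3.2", stream: false, prompt: $p}' \
  | curl -sS -X POST -d @- ollama:11434/api/generate \
  | jq -r .response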