Monday 9 October 2017

Rrdtool Moving Average


I am working with a large number of time series. They are basically network measurements coming in every 10 minutes, and some of them are periodic (e.g. the bandwidth), while some others aren't (e.g. the amount of routing traffic). I would like a simple algorithm for doing online outlier detection. Basically, I want to keep in memory (or on disk) the whole historical data for every time series, and I want to detect any outlier in a live scenario (each time a new sample is captured). What is the best way to achieve these results? I'm currently using a moving average in order to remove the noise, but then what next? Simple things like standard deviation, MAD, and so on against the whole data set don't work well (I can't assume the time series are stationary), and I would like something more accurate, ideally a black box like double outlier_detection(double* vector, double value), where vector is the array of doubles containing the historical data, and the return value is the anomaly score for the new sample value. asked Aug 2 '10 at 20:37

Yes, I have assumed the frequency is known and specified. There are methods to estimate the frequency automatically, but that would complicate the function considerably. If you need to estimate the frequency, try asking a separate question about it - and I'll probably provide an answer, but it needs more space than I have available in a comment. – Rob Hyndman Aug 3 '10 at 23:40

A good solution will have several ingredients, including: Use a resistant, moving-window smooth to remove the non-stationarity. Re-express the original data so that the residuals with respect to the smooth are approximately symmetrically distributed. Given the nature of your data, it is likely that their square roots or logarithms would give symmetric residuals. Apply control chart methods, or at least control chart thinking, to the residuals. As far as the last one goes, control chart thinking shows that conventional thresholds like 2 SD or 1.5 times the IQR beyond the quartiles work poorly because they trigger too many false out-of-control signals. People usually use 3 SD in control chart work, whence 2.5 (or even 3) times the IQR beyond the quartiles would be a good starting point. I have more or less outlined the nature of Rob Hyndman's solution while adding to it two major points: the potential need to re-express the data and the wisdom of being more conservative in signaling an outlier. I am not sure that Loess is good for an online detector, though, because it doesn't work well at the endpoints. You could instead use something as simple as a moving median filter (as in Tukey's resistant smoothing). If the outliers don't come in bursts, you can use a narrow window (5 data points, perhaps, which will break down only with a burst of 3 or more outliers within a group of 5). Once you have performed the analysis to determine a good re-expression of the data, you are unlikely to need to change the re-expression. Therefore, your online detector really only needs to refer to the most recent values (the latest window) because it won't use the earlier data at all.
If you have very long time series you could go further and analyze autocorrelation and seasonality (such as recurring daily or weekly fluctuations) to improve the procedure. answered Aug 26 '10 at 18:02

John, 1.5 IQR is Tukey's original recommendation for the longest whiskers on a boxplot and 3 IQR is his recommendation for marking points as "far outliers" (a riff on a popular 60's phrase). This is built into many boxplot algorithms. The recommendation is analyzed theoretically in Hoaglin, Mosteller & Tukey, Understanding Robust and Exploratory Data Analysis. – whuber ♦ Oct 9 '12 at 21:38

This confirms the time series data I have been trying to analyze: a window average and also a window standard deviation. ((x - avg) / sd) > 3 seem to be the points I want to flag as outliers. Well, at least warn about as outliers; I flag anything above 10 SD as extreme error outliers. The problem I have run into is what an ideal window length is: I'm playing with anything between 4 and 8 data points. – NeoZenith Jun 29 '16 at 8:00

Neo, your best bet may be to experiment with a subset of your data and confirm the conclusions with tests on the rest. You could carry out a more formal cross-validation too (but special care is needed with time series data because of the interdependence of all the values). – whuber ♦ Jun 29 '16 at 0:10

(This answer responded to a duplicate, now closed, question at "Detecting outstanding events", which presented some data in graphical form.) Outlier detection depends on the nature of the data and on what you are willing to assume about them. General-purpose methods rely on robust statistics. The spirit of this approach is to characterize the bulk of the data in a way that is not influenced by any outliers, and then point to any individual values that do not fit within that characterization. Because this is a time series, it adds the complication of needing to (re)detect outliers on an ongoing basis. If this is to be done as the series unfolds, then we are allowed to use only older data for the detection, not future data. Moreover, as protection against the many repeated tests, we would want to use a method that has a very low false positive rate. These considerations suggest running a simple, robust moving-window outlier test over the data. There are many possibilities, but one that is simple, easily understood, and easily implemented is based on a running MAD: the median absolute deviation from the median. This is a strongly robust measure of variation within the data, akin to a standard deviation. An outlying peak would be several MADs or more above the median. There is still some tuning to be done: how much of a deviation from the bulk of the data should be considered outlying, and how far back in time should one look? Let's leave these as parameters for experimentation.
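As an independent illustration of that running-median/MAD idea (the answer itself goes on to describe an R implementation), here is a minimal Python sketch. The window length and the MAD multiplier are exactly the two parameters left open above; the values used here are only placeholders.

```python
import numpy as np

def mad_outlier_score(history, value, window=30):
    """Score a new sample against a trailing window using the running MAD.

    history: sequence of past samples, most recent last.
    Returns how many MADs the new value lies above the window median;
    comparing the score against a multiplier (e.g. 5) flags outlying peaks.
    """
    recent = np.asarray(history[-window:], dtype=float)
    med = np.median(recent)
    mad = np.median(np.abs(recent - med))
    if mad == 0:
        # Degenerate window (all values identical): avoid division by zero.
        mad = np.finfo(float).eps
    return (value - med) / mad

# Illustrative use as each new sample arrives:
history = [10.1, 9.8, 10.3, 10.0, 9.9] * 6   # pretend these are 30 past samples
new_value = 14.2
print(mad_outlier_score(history, new_value) > 5)   # True: flag as an outlying peak
```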
Here is an R implementation applied to the data x = (1, 2, …, n) (with n = 1150 to emulate the data) and corresponding values y. Applied to a data set like the red curve illustrated in the question, it produces this result: the data are shown in red, the 30-day window of median + 5*MAD thresholds in gray, and the outliers - which are simply those data values above the gray curve - in black. (The threshold can only be computed starting at the end of the initial window; for all data within this initial window, the first threshold is used: that is why the gray curve is flat between x = 0 and x = 30.) The effects of changing the parameters are (a) increasing the value of the window will tend to smooth out the gray curve and (b) increasing the threshold will raise the gray curve. Knowing this, one can take an initial segment of the data and quickly identify values of the parameters that best segregate the outlying peaks from the rest of the data, and apply those parameter values to checking the rest of the data. If a plot shows the method is getting worse over time, it means the nature of the data is changing and the parameters might need re-tuning. Notice how little this method assumes about the data: they do not have to be normally distributed; they do not need to exhibit any periodicity; they do not even have to be non-negative. All it assumes is that the data behave in reasonably similar ways over time and that the outlying peaks are visibly higher than the rest of the data. If anyone would like to experiment (or compare some other solution to the one offered here), here is the code I used to produce data like those shown in the question.

I am guessing a sophisticated time series model will not work for you because of the time it takes to detect outliers using this methodology. So here is a workaround: first establish baseline "normal" traffic patterns for a year, based on manual analysis of historical data which accounts for time of day, weekday versus weekend, month of the year, etc. Use this baseline along with some simple mechanism (e.g. the moving average suggested by Carlos) to detect outliers. You may also want to review the statistical process control literature for some ideas.

Yes, this is exactly what I am doing: until now I manually split the signal into periods, so that for each of them I can define a confidence interval within which the signal is supposed to be stationary, and therefore I can use standard methods such as the standard deviation. The real problem is that I cannot decide the expected pattern for all the signals I have to analyze, and that's why I'm looking for something more intelligent. – gianluca Aug 2 '10 at 21:37

Here is one idea: Step 1: implement and estimate a generic time series model on a one-time basis, based on historical data (this can be done offline). Step 2: use the resulting model to detect outliers. Step 3: at some frequency (perhaps every month), re-calibrate the time series model (this can also be done offline) so that the step 2 detection of outliers does not drift too far out of step with current traffic patterns. Would that work for your context? – user28 Aug 2 '10 at 22:24

Yes, this might work.
I was thinking about a similar approach (recomputing the baseline every week, which can be CPU intensive if you have hundreds of univariate time series to analyze). BTW the real hard question is "what is the best blackbox-style algorithm for modeling a completely generic signal, accounting for noise, trend estimation and seasonality?". AFAIK, every approach in the literature requires a really hard "parameter tuning" phase, and the only automatic method I found is an ARIMA model by Hyndman (robjhyndman.com/software/forecast). Am I missing something? – gianluca Aug 2 '10 at 22:38

Again, this works quite well if the signal is supposed to have a seasonality like that, but if I use a completely different kind of time series (i.e. the average TCP round-trip time over time), this method does not work (since it would be better to handle that one with a simple global mean and standard deviation using a sliding window containing historical data). – gianluca Aug 2 '10 at 22:02

Unless you are willing to implement a general time series model (which brings in its cons in terms of latency etc.), I am pessimistic that you will find a general implementation which at the same time is simple enough to work for all kinds of time series. – user28 Aug 2 '10 at 22:06

Another comment: I know a good answer might be "so you could estimate the periodicity of the signal, and decide the algorithm to use according to it", but I didn't find a really good solution to this other problem (I played a bit with spectral analysis using the DFT and time-domain analysis using the autocorrelation function, but my time series contain a lot of noise and such methods give crazy results most of the time). – gianluca Aug 2 '10 at 22:06

A comment to your last comment: that's why I'm looking for a more generic approach, but I need a kind of "black box" because I can't make any assumption about the analyzed signal, and therefore I can't create the "best parameter set for the learning algorithm". – gianluca Aug 2 '10 at 22:09

Since it is time series data, a simple exponential smoothing filter (en.wikipedia.org/wiki/Exponential_smoothing) will smooth the data. It is a good filter since you don't need to accumulate old data points. Compare every newly smoothed data value with its unsmoothed value. Once the deviation exceeds a certain predefined threshold (depending on what you believe an outlier in your data is), then your outlier can be easily detected. answered Apr 30 '15 at 8:50

You could use the standard deviation of the last N measurements (you have to pick a suitable N). A good anomaly score would be how many standard deviations a measurement is from the moving average. answered Aug 2 '10 at 20:48

Thanks for your answer, but what if the signal exhibits a high seasonality (i.e. a lot of network measurements are characterized by a daily and a weekly pattern at the same time, for example night vs day or weekend vs working days)? An approach based on the standard deviation will not work in that case.
– gianluca Aug 2 '10 at 20:57

For example, if I get a new sample every 10 minutes, and I'm doing outlier detection of the network bandwidth usage of a company, basically at 6pm this measure will fall down (this is expected, a totally normal pattern), and a standard deviation computed over a sliding window will fail (because it will trigger an alert for sure). At the same time, if the measure falls down at 4pm (deviating from the usual baseline), this is a real outlier. – gianluca Aug 2 '10 at 20:58

What I do is group the measurements by hour and day of the week and compare standard deviations of that. It still doesn't correct for things like holidays and summer/winter seasonality, but it's correct most of the time. The drawback is that you really need to collect a year or so of data before there is enough for the stdDev to start making sense.

Spectral analysis detects periodicity in stationary time series. The frequency-domain approach based on spectral density estimation is an approach I would recommend as your first step. If for certain periods irregularity means a much higher peak than is typical for that period, then the series with such irregularities would not be stationary and spectral analysis would not be appropriate. But assuming you have identified the period that has the irregularities, you should be able to determine approximately what the normal peak height would be and then can set a threshold at some level above that average to designate the irregular cases. answered Sep 3 '12 at 14:59

I suggest the scheme below, which should be implementable in a day or two: collect as many samples as you can hold in memory; remove obvious outliers using the standard deviation for each attribute; calculate and store the correlation matrix and also the mean of each attribute; calculate and store the Mahalanobis distances of all your samples. Calculating outlierness: for the single sample of which you want to know its outlierness, retrieve the means, covariance matrix and Mahalanobis distances from training; calculate the Mahalanobis distance d for the sample; return the percentile in which d falls (using the Mahalanobis distances from training). That will be your outlier score: 100% is an extreme outlier. PS. In calculating the Mahalanobis distance, use the correlation matrix, not the covariance matrix. This is more robust if the sample measurements vary in unit and number.

Graphite[1] performs two pretty simple tasks: storing numbers that change over time and graphing them. There has been a lot of software written over the years to do these same tasks. What makes Graphite unique is that it provides this functionality as a network service that is both easy to use and highly scalable. The protocol for feeding data into Graphite is simple enough that you could learn to do it by hand in a few minutes (not that you'd really want to, but it's a decent litmus test for simplicity). Rendering graphs and retrieving data points are as easy as fetching a URL. This makes it very natural to integrate Graphite with other software and enables users to build powerful applications on top of Graphite. One of the most common uses of Graphite is building web-based dashboards for monitoring and analysis.
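As a rough illustration of that simplicity, here is a minimal Python sketch of both interfaces: feeding one data point over the plain-text protocol and then fetching a rendered graph over HTTP. The host name and metric are made up for the example; the port and the URL parameters follow the conventions described later in this chapter.

```python
import socket
import time
import urllib.request

GRAPHITE_HOST = "graphite.example.com"   # illustrative host name

# Feed one data point: "metric value unix_timestamp\n" over TCP port 2003.
sock = socket.create_connection((GRAPHITE_HOST, 2003))
sock.sendall(f"servers.www01.cpuUsage 42.5 {int(time.time())}\n".encode())
sock.close()

# Retrieve a rendered graph of the last 24 hours of that metric as a PNG.
url = (f"http://{GRAPHITE_HOST}/render?target=servers.www01.cpuUsage"
       "&from=-24hours&width=500&height=300")
with open("cpuUsage.png", "wb") as out:
    out.write(urllib.request.urlopen(url).read())
```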
Graphite was born in a high-volume e-commerce environment and its design reflects this. Scalability and real-time access to data are key goals. The components that allow Graphite to achieve these goals include a specialized database library and its storage format, a caching mechanism for optimizing I/O operations, and a simple yet effective method of clustering Graphite servers. Rather than simply describing how Graphite works today, I will explain how Graphite was initially implemented (quite naively), what problems I ran into, and how I devised solutions to them.

7.1. The Database Library: Storing Time-Series Data
Graphite is written entirely in Python and consists of three major components: a database library named whisper, a back-end daemon named carbon, and a front-end webapp that renders graphs and provides a basic UI. While whisper was written specifically for Graphite, it can also be used independently. It is very similar in design to the round-robin database used by RRDtool, and only stores time-series numeric data. Usually we think of databases as server processes that client applications talk to over sockets. However, whisper, much like RRDtool, is a database library used by applications to manipulate and retrieve data stored in specially formatted files. The most basic whisper operations are create, to make a new whisper file; update, to write new data points into a file; and fetch, to retrieve data points.

Figure 7.1: Basic Anatomy of a whisper File
As shown in Figure 7.1, whisper files consist of a header section containing various metadata, followed by one or more archive sections. Each archive is a sequence of consecutive data points which are (timestamp, value) pairs. When an update or fetch operation is performed, whisper determines the offset in the file where the data should be written to or read from, based on the timestamp and the archive configuration.

7.2. The Back End: A Simple Storage Service
Graphite's back end is a daemon process called carbon-cache, usually simply referred to as carbon. It is built on Twisted, a highly scalable event-driven I/O framework for Python. Twisted enables carbon to efficiently talk to a large number of clients and handle a large amount of traffic with low overhead. Figure 7.2 shows the flow of data among carbon, whisper and the webapp: client applications collect data and send it to the Graphite back end, carbon, which stores the data using whisper. This data can then be used by the Graphite webapp to generate graphs.

Figure 7.2: Data Flow
The primary function of carbon is to store data points for metrics provided by clients. In Graphite terminology, a metric is any measurable quantity that can vary over time (like the CPU utilization of a server or the number of sales of a product). A data point is simply a (timestamp, value) pair corresponding to the measured value of a particular metric at a point in time. Metrics are uniquely identified by their name, and the name of each metric as well as its data points are provided by client applications.
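Returning to the whisper layout just described, the following sketch shows how a fixed-size, round-robin archive can map a timestamp to a byte offset in the file. The header handling is omitted and the field layout is a simplified assumption rather than whisper's actual on-disk format; only the 12-byte data-point size matches the real library.

```python
import struct

POINT_FORMAT = "!Ld"                          # 4-byte timestamp + 8-byte double
POINT_SIZE = struct.calcsize(POINT_FORMAT)    # 12 bytes per data point

def point_offset(timestamp, archive_offset, seconds_per_point, points):
    """Byte offset of the slot that holds `timestamp`.

    The archive is a ring of `points` slots, one per `seconds_per_point`
    interval, beginning at `archive_offset` bytes into the file.
    """
    interval = timestamp - (timestamp % seconds_per_point)   # align to the interval
    slot = (interval // seconds_per_point) % points          # wrap around the ring
    return archive_offset + slot * POINT_SIZE

def write_point(f, timestamp, value, archive_offset, seconds_per_point, points):
    """Write one (timestamp, value) pair into its slot of an open binary file."""
    interval = timestamp - (timestamp % seconds_per_point)
    f.seek(point_offset(timestamp, archive_offset, seconds_per_point, points))
    f.write(struct.pack(POINT_FORMAT, interval, value))
```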
A common type of client application is a monitoring agent that collects system or application metrics and sends its collected values to carbon for easy storage and visualization. Metrics in Graphite have simple hierarchical names, similar to filesystem paths except that a dot is used to delimit the hierarchy rather than a forward or backward slash. carbon will respect any legal name and creates a whisper file for each metric to store its data points. The whisper files are stored within carbon's data directory in a filesystem hierarchy that mirrors the dot-delimited hierarchy in each metric's name, so that (for example) servers.www01.cpuUsage maps to …/servers/www01/cpuUsage.wsp.

When a client application wishes to send data points to Graphite it must establish a TCP connection to carbon, usually on port 2003[2]. The client does all the talking; carbon does not send anything over the connection. The client sends data points in a simple plain-text format while the connection may be left open and re-used as needed. The format is one line of text per data point, where each line contains the dotted metric name, the value, and a Unix epoch timestamp separated by spaces. For example, a client might send: At a high level, all carbon does is listen for data in this format and try to store it on disk as quickly as possible using whisper. Later on we will discuss the details of some tricks used to ensure scalability and get the best performance we can out of a typical hard drive.

7.3. The Front End: Graphs On-Demand
The Graphite webapp allows users to request custom graphs with a simple URL-based API. Graphing parameters are specified in the query string of an HTTP GET request, and a PNG image is returned in response. For example, the URL: requests a 500×300 graph for the metric servers.www01.cpuUsage and the last 24 hours of data. In fact, only the target parameter is required; all the others are optional and use default values if omitted. Graphite supports a wide variety of display options as well as data manipulation functions that follow a simple functional syntax. For example, we could graph a 10-point moving average of the metric in our previous example like this: Functions can be nested, allowing for complex expressions and calculations. Here is another example that gives the running total of sales for the day, using per-product metrics of sales per minute: The sumSeries function computes a time series that is the sum of every metric matching the pattern products.*.salesPerMinute. Then integral computes a running total rather than a per-minute count. From here it isn't too hard to imagine how one might build a web UI for viewing and manipulating graphs. Graphite comes with its own Composer UI, shown in Figure 7.3, which does this by using Javascript to modify the graph's URL parameters as the user clicks through menus of the available functions.

Figure 7.3: Graphite's Composer Interface
7.4. Dashboards
Since its inception Graphite has been used as a tool for creating web-based dashboards.
The URL API makes this a natural use case. Making a dashboard is as simple as making an HTML page full of tags like this: However, not everyone likes crafting URLs by hand, so Graphite's Composer UI provides a point-and-click method for creating a graph from which you can simply copy and paste the URL. When coupled with another tool that allows rapid creation of web pages (like a wiki), this becomes easy enough that non-technical users can build their own dashboards quite easily.

7.5. An Obvious Bottleneck
Once my users started building dashboards, Graphite quickly began to have performance issues. I investigated the web server logs to see what requests were bogging it down. It was pretty obvious that the problem was the sheer number of graphing requests. The webapp was CPU-bound, rendering graphs constantly. I noticed that there were a lot of identical requests, and the dashboards were to blame. Imagine you have a dashboard with 10 graphs in it and the page refreshes once a minute. Each time a user opens the dashboard in their browser, Graphite has to handle 10 more requests per minute. This quickly becomes expensive.

A simple solution is to render each graph only once and then serve a copy of it to each user. The Django web framework (which Graphite is built on) provides an excellent caching mechanism that can use various back ends such as memcached. Memcached[3] is essentially a hash table provided as a network service. Client applications can get and set key-value pairs just like an ordinary hash table. The main benefit of using memcached is that the result of an expensive request (like rendering a graph) can be stored very quickly and retrieved later to handle subsequent requests. To avoid returning the same stale graphs forever, memcached can be configured to expire the cached graphs after a short period. Even if this is only a few seconds, the burden it takes off Graphite is tremendous because duplicate requests are so common.

Another common case that creates lots of rendering requests is when a user is tweaking the display options and applying functions in the Composer UI. Each time the user changes something, Graphite has to redraw the graph. The same data is involved in each request, so it makes sense to put the underlying data in memcached as well. This keeps the UI responsive to the user because the step of retrieving the data is skipped.

7.6. Optimizing I/O
Imagine you have 60,000 metrics being sent to your Graphite server, and each of these metrics has one data point per minute. Remember that each metric has its own whisper file on the filesystem. This means carbon must perform one write operation to 60,000 different files each minute. As long as carbon can write to one file each millisecond, it should be able to keep up. This is not too far-fetched, but say you have 600,000 metrics updating each minute, or your metrics are updating every second, or perhaps you simply cannot afford fast enough storage.
Whatever the case, assume the rate of incoming data points exceeds the rate of write operations your storage can keep up with. How should this situation be handled? Most hard drives these days have slow seek time[4], that is, the delay between doing I/O operations in two different locations, compared to writing a contiguous sequence of data. This means the more contiguous writing we do, the more throughput we get. But if we have thousands of files that need to be written to frequently, and each write is very small (one whisper data point is only 12 bytes), then our disks are definitely going to spend most of their time seeking.

Working under the assumption that the rate of write operations has a relatively low ceiling, the only way to increase our data point throughput beyond that rate is to write multiple data points in a single write operation. This is feasible because whisper arranges consecutive data points contiguously on disk. So I added an update_many function to whisper, which takes a list of data points for a single metric and compacts contiguous data points into a single write operation. Even though this makes each write larger, the difference in the time it takes to write ten data points (120 bytes) versus one data point (12 bytes) is negligible. It takes quite a few more data points before the size of each write starts to noticeably affect the latency.

Next I implemented a buffering mechanism in carbon. Each incoming data point gets mapped to a queue based on its metric name and is then appended to that queue. Another thread repeatedly iterates through all of the queues and, for each one, pulls all of the data points out and writes them to the appropriate whisper file with update_many. Going back to our example, if we have 600,000 metrics updating every minute and our storage can only keep up with 1 write per millisecond, then the queues will end up holding about 10 data points each, on average. The only resource this costs us is memory, which is relatively plentiful since each data point is only a few bytes.

This strategy dynamically buffers as many data points as necessary to sustain a rate of incoming data points that may exceed the rate of I/O operations your storage can keep up with. A nice advantage of this approach is that it adds a degree of resiliency to handle temporary I/O slowdowns. If the system needs to do other I/O work outside of Graphite, then it is likely that the rate of write operations will decrease, in which case carbon's queues will simply grow. The larger the queues, the larger the writes. Since the overall throughput of data points is equal to the rate of write operations times the average size of each write, carbon is able to keep up as long as there is enough memory for the queues. carbon's queueing mechanism is shown in Figure 7.4.

Figure 7.4: Carbon's Queueing Mechanism
7.7. Keeping It Real-Time
Buffering data points was a nice way to optimize carbon's I/O, but it did not take long for my users to notice a rather troubling side effect.
Revisiting our example once more, we've got 600,000 metrics that update every minute and we're assuming our storage can only keep up with 60,000 write operations per minute. This means we will have approximately 10 minutes' worth of data sitting in carbon's queues at any given time. To a user this means that the graphs they request from the Graphite webapp will be missing the most recent 10 minutes of data: not good!

Fortunately the solution is quite straight-forward. I simply added a socket listener to carbon that provides a query interface for accessing the buffered data points, and then modified the Graphite webapp to use this interface each time it needs to retrieve data. The webapp then combines the data points it retrieves from carbon with the data points it retrieved from disk and voilà, the graphs are real-time. Granted, in our example the data points are updated to the minute and thus not exactly real-time, but the fact that each data point is instantly accessible in a graph once it is received by carbon is real-time.

7.8. Kernels, Caches, and Catastrophic Failures
As is probably obvious by now, a key characteristic of system performance that Graphite's own performance depends on is I/O latency. So far we've assumed our system has consistently low I/O latency, averaging around 1 millisecond per write, but this is a big assumption that requires a little deeper analysis. Most hard drives simply aren't that fast; even with dozens of disks in a RAID array there is very likely to be more than 1 millisecond of latency for random access. Yet if you were to try to test how quickly even an old laptop could write a whole kilobyte to disk, you would find that the write system call returns in far less than 1 millisecond. Why?

Whenever software has inconsistent or unexpected performance characteristics, usually either buffering or caching is to blame. In this case, we're dealing with both. The write system call doesn't technically write your data to disk; it simply puts it in a buffer which the kernel then writes to disk later on. This is why the write call usually returns so quickly. Even after the buffer has been written to disk, it often remains cached for subsequent reads. Both of these behaviors, buffering and caching, require memory, of course.

Kernel developers, being the smart folks that they are, decided it would be a good idea to use whatever user-space memory is currently free instead of allocating memory outright. This turns out to be a tremendously useful performance booster, and it also explains why, no matter how much memory you add to a system, it will usually end up having almost zero free memory after doing a modest amount of I/O. If your user-space applications aren't using that memory, your kernel probably is. The downside of this approach is that this memory can be taken away from the kernel the moment a user-space application decides it needs to allocate more memory for itself. The kernel has no choice but to give it up, losing whatever buffers may have been there.
So what does all of this mean for Graphite? We just highlighted carbon's reliance on consistently low I/O latency, and we also know that the write system call only returns quickly because the data is merely being copied into a buffer. What happens when there is not enough memory for the kernel to continue buffering writes? The writes become synchronous and thus terribly slow! This causes a dramatic drop in the rate of carbon's write operations, which causes carbon's queues to grow, which eats up even more memory, starving the kernel even further. In the end, this kind of situation usually results in carbon running out of memory or being killed by an angry sysadmin.

To avoid this kind of catastrophe, I added several features to carbon, including configurable limits on how many data points can be queued and rate limits on how quickly various whisper operations can be performed. These features can protect carbon from spiraling out of control and instead impose less harsh effects, like dropping some data points or refusing to accept more data points. However, the proper values for those settings are system-specific and require a fair amount of testing to tune. They are useful, but they do not fundamentally solve the problem. For that, we'll need more hardware.

7.9. Clustering
Making multiple Graphite servers appear to be a single system from a user perspective isn't terribly difficult, at least for a naïve implementation. The webapp's user interaction primarily consists of two operations: finding metrics and fetching data points (usually in the form of a graph). The find and fetch operations of the webapp are tucked away in a library that abstracts their implementation from the rest of the codebase, and they are also exposed through HTTP request handlers for easy remote calls.

The find operation searches the local filesystem of whisper data for things matching a user-specified pattern, just as a filesystem glob like *.txt matches files with that extension. Being a tree structure, the result returned by find is a collection of Node objects, each deriving from either the Branch or Leaf subclasses of Node. Directories correspond to branch nodes and whisper files correspond to leaf nodes. This layer of abstraction makes it easy to support different types of underlying storage, including RRD files[5] and gzipped whisper files. The Leaf interface defines a fetch method whose implementation depends on the type of leaf node. In the case of whisper files it is simply a thin wrapper around the whisper library's own fetch function. When clustering support was added, the find function was extended to be able to make remote find calls via HTTP to the other Graphite servers specified in the webapp's configuration. The node data contained in the results of these HTTP calls gets wrapped as RemoteNode objects, which conform to the usual Node, Branch, and Leaf interfaces. This makes the clustering transparent to the rest of the webapp's codebase. The fetch method for a remote leaf node is implemented as another HTTP call to retrieve the data points from the node's Graphite server.
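A condensed sketch of that abstraction layer is shown below, to make the local-versus-remote symmetry concrete. The class names mirror the ones mentioned in the text, but the structure, the HTTP endpoint and its parameters are illustrative assumptions rather than Graphite's actual code.

```python
import json
import urllib.request

import whisper   # the whisper library from Section 7.1


class Node:
    def __init__(self, metric_path):
        self.metric_path = metric_path


class Branch(Node):
    """A directory in the metric hierarchy; nothing to fetch."""


class Leaf(Node):
    """Anything that can actually return data points."""
    def fetch(self, start, end):
        raise NotImplementedError


class WhisperLeaf(Leaf):
    def __init__(self, metric_path, fs_path):
        super().__init__(metric_path)
        self.fs_path = fs_path

    def fetch(self, start, end):
        # Thin wrapper around the whisper library's own fetch function.
        return whisper.fetch(self.fs_path, start, end)


class RemoteLeaf(Leaf):
    def __init__(self, metric_path, server):
        super().__init__(metric_path)
        self.server = server

    def fetch(self, start, end):
        # Fetch the data points from the remote Graphite server over HTTP.
        # The endpoint path and the local=1 flag are made up for the example.
        url = (f"http://{self.server}/fetch?target={self.metric_path}"
               f"&from={start}&until={end}&local=1")
        return json.load(urllib.request.urlopen(url))
```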
All of these calls are made between the webapps the same way a client would make them, except with one additional parameter specifying that the operation should only be performed locally and not be redistributed throughout the cluster. When the webapp is asked to render a graph, it performs the find operation to locate the requested metrics and calls fetch on each to retrieve their data points. This works whether the data is on the local server, on remote servers, or both. If a server goes down, the remote calls time out fairly quickly and the server is marked as being out of service for a short period, during which no further calls will be made to it. From a user standpoint, whatever data was on the lost server will be missing from their graphs, unless that data is duplicated on another server in the cluster.

7.9.1. A Brief Analysis of Clustering Efficiency
The most expensive part of a graphing request is rendering the graph. Each rendering is performed by a single server, so adding more servers does not effectively increase capacity for rendering graphs. However, the fact that many requests end up distributing find calls to every other server in the cluster means that our clustering scheme is sharing much of the front-end load rather than dispersing it. What we have gained at this point, though, is an effective way to distribute the back-end load, as each carbon instance operates independently. This is a good first step since most of the time the back end is a bottleneck far before the front end is, but clearly the front end does not scale horizontally with this approach.

In order to make the front end scale more effectively, the number of remote find calls made by the webapp must be reduced. Again, the easiest solution is caching. Just as memcached is already used to cache data points and rendered graphs, it can also be used to cache the results of find requests. Since the location of metrics is much less likely to change frequently, these results should generally be cached for longer. The trade-off of setting the cache timeout for find results too long, though, is that new metrics that have been added to the hierarchy may not appear as quickly to the user.

7.9.2. Distributing Metrics in a Cluster
The Graphite webapp is rather homogeneous throughout a cluster, in that it performs the exact same job on each server. carbon's role, however, can vary from server to server depending on what data you choose to send to each instance. Often there are many different clients sending data to carbon, so it would be quite annoying to couple each client's configuration with your Graphite cluster's layout. Application metrics may go to one carbon server, while business metrics may get sent to multiple carbon servers for redundancy. To simplify the management of scenarios like this, Graphite comes with an additional tool called carbon-relay.
Its job is quite simple: it receives metric data from clients exactly like the standard carbon daemon (which is actually named carbon-cache), but instead of storing the data, it applies a set of rules to the metric names to determine which carbon-cache servers to relay the data to. Each rule consists of a regular expression and a list of destination servers. For each data point received, the rules are evaluated in order and the first rule whose regular expression matches the metric name is used. This way all the clients need to do is send their data to the carbon-relay and it will end up on the right servers. In a sense carbon-relay provides replication functionality, though it would more accurately be called input duplication, since it does not deal with synchronization issues. If a server goes down temporarily, it will be missing the data points for the time period in which it was down, but otherwise it will function normally. There are administrative scripts that leave control of the re-synchronization process in the hands of the system administrator.

7.10. Design Reflections
My experience in working on Graphite has reaffirmed a belief of mine that scalability has very little to do with low-level performance but instead is a product of overall design. I have run into many bottlenecks along the way, but each time I looked for improvements in design rather than speed-ups in performance. I have been asked many times why I wrote Graphite in Python rather than Java or C, and my response is always that I have yet to come across a true need for the performance that another language could offer. In [Knu74], Donald Knuth famously said that premature optimization is the root of all evil. As long as we assume that our code will continue to evolve in non-trivial ways, then all optimization[6] is in some sense premature.

One of Graphite's greatest strengths and greatest weaknesses is the fact that very little of it was actually designed in the traditional sense. By and large Graphite evolved gradually, hurdle by hurdle, as problems arose. Many times the hurdles were foreseeable and various pre-emptive solutions seemed natural. However, it can be useful to avoid solving problems you do not actually have yet, even if it seems likely that you soon will. The reason is that you can learn much more from closely studying actual failures than from theorizing about superior strategies. Problem solving is driven both by the empirical data we have at hand and by our own knowledge and intuition. I've found that doubting your own wisdom sufficiently can force you to look at your empirical data more thoroughly.

For example, when I wrote whisper I was convinced that it would have to be rewritten in C for speed and that my Python implementation would only serve as a prototype. If I weren't under a time crunch I very well might have skipped the Python implementation entirely. It turns out, however, that I/O is a bottleneck so much earlier than CPU that the lesser efficiency of Python hardly matters at all in practice. As I said, though, the evolutionary approach is also a great weakness of Graphite. Interfaces, it turns out, do not lend themselves well to gradual evolution.
A good interface is consistent and employs conventions to maximize predictability. By this measure, Graphite's URL API is currently a sub-par interface in my opinion. Options and functions have been tacked on over time, sometimes forming small islands of consistency, but overall lacking a global sense of consistency. The only way to solve such a problem is through versioning of interfaces, but this too has drawbacks. Once a new interface is designed, the old one is still hard to get rid of, lingering around as evolutionary baggage like the human appendix. It may seem harmless enough until one day your code gets appendicitis (that is, a bug tied to the old interface) and you are forced to operate. If I were to change one thing about Graphite early on, it would have been to take much greater care in designing the external APIs, thinking ahead instead of evolving them bit by bit.

Another aspect of Graphite that causes some frustration is the limited flexibility of the hierarchical metric naming model. While it is quite simple and very convenient for most use cases, it makes some sophisticated queries very difficult, even impossible, to express. When I first thought of creating Graphite, I knew from the very beginning that I wanted a human-editable URL API for creating graphs[7]. While I'm still glad that Graphite provides this today, I'm afraid this requirement has burdened the API with an excessively simple syntax that makes complex expressions unwieldy. A hierarchy makes the problem of determining the primary key for a metric quite simple, because a path is essentially a primary key for a node in the tree. The downside is that all of the descriptive data (i.e. column data) must be embedded directly in the path. A potential solution is to maintain the hierarchical model and add a separate metadata database to allow more advanced selection of metrics with a special syntax.

7.11. Becoming Open Source
Looking back at the evolution of Graphite, I am still surprised both by how far it has come as a project and by how far it has taken me as a programmer. It started as a pet project that was only a few hundred lines of code. The rendering engine started as an experiment, simply to see if I could write one. whisper was written over the course of a weekend out of desperation to solve a show-stopper problem before a critical launch date. carbon has been rewritten more times than I care to remember. Once I was allowed to release Graphite under an open source license in 2008, I never really expected much of a response. After a few months it was mentioned in a CNET article that got picked up by Slashdot, and the project suddenly took off and has been active ever since. Today there are dozens of large and mid-sized companies using Graphite. The community is quite active and continues to grow. Far from being a finished product, there is a lot of cool experimental work being done, which keeps it fun to work on and full of potential.

1. launchpad.net/graphite
2. There is another port over which serialized objects can be sent, which is more efficient than the plain-text format. This is only needed for very high levels of traffic.
3. memcached.org
4. Solid-state drives generally have extremely fast seek times compared to traditional hard drives.
5. RRD files are actually branch nodes because they can contain multiple data sources; an RRD data source is a leaf node.
6. Knuth specifically meant low-level code optimization, not macroscopic optimization such as design improvements.
7. This forces the graphs themselves to be open source. Anyone can simply look at a graph's URL to understand it or modify it.

BSD Planet
February 24, 2017
The second release candidate of NetBSD 7.1 is now available for download at: Those of you who prefer to build from source can continue to follow the netbsd-7 branch or use the netbsd-7-1-RC2 tag. Most of the changes made since 7.1RC1 have been security updates. See src/doc/CHANGES-7.1 for the complete list. Please help us out by testing 7.1RC2. We love any and all feedback. Report problems through the usual channels (submit a PR or write to the appropriate list). More general feedback is welcome at [email protected].

February 23, 2017
Goals: use pkgcomp 2.0 to build a binary repository of all the packages you are interested in; keep the repository fresh on a daily basis; and use that repository with pkgin to keep your macOS system up-to-date and secure. This tutorial is specifically targeted at macOS and relies on the macOS-specific self-installer package. For a more generic tutorial that uses the pkgcomp-cron package in pkgsrc, see Keeping NetBSD up-to-date with pkgcomp 2.0.

Getting started: First download and install the standalone macOS installer package. To find the right file, go to the releases page on GitHub, pick the most recent release, and download the file with a name of the form pkgcomp-<version>-macos.pkg. Then double-click on the file you downloaded and follow the installation instructions. You will be asked for your administrator password because the installer has to place files under /usr/local; note that pkgcomp requires root privileges anyway to run (because it uses chroot(8) internally), so you will have to grant permission at some point or another. The installer modifies the default PATH (by creating /etc/paths.d/pkgcomp) to include pkgcomp's own installation directory and pkgsrc's installation prefix. Restart your shell sessions to make this change effective, or update your own shell startup scripts accordingly if you do not use the standard ones. Finally, make sure you have Xcode installed in the standard /Applications/Xcode.app location and that all the components needed to build command-line apps are available. Tip: try running cc from the command line and see if it prints its usage message.

Adjusting the configuration: The macOS flavor of pkgcomp is configured with an installation prefix of /usr/local, which means that the executable is located at /usr/local/sbin/pkgcomp and the configuration files are in /usr/local/etc/pkgcomp. This is intentional to keep the pkgcomp installation separate from your pkgsrc installation so that it can run no matter what state your pkgsrc installation is in. The configuration files are as follows: /usr/local/etc/pkgcomp/default.conf.
This is pkgcomp's own configuration file and the defaults configured by the installer should be good to go for macOS. In particular, packages are configured to go into /opt/pkg instead of the traditional /usr/pkg. This is a necessity because the latter is not writable starting with OS X El Capitan thanks to System Integrity Protection (SIP). /usr/local/etc/pkgcomp/sandbox.conf. This is the configuration file for sandboxctl, which is the support tool that pkgcomp uses to manage the compilation sandbox. The default settings configured by the installer should be good. /usr/local/etc/pkgcomp/extra.mk.conf. This is pkgsrc's own configuration file. In here, you should configure things like the licenses that are acceptable to you and the package-specific options you'd like to set. You should not configure the layout of the installed files (e.g. LOCALBASE) because that's handled internally by pkgcomp as specified in default.conf. /usr/local/etc/pkgcomp/list.txt. This determines the set of packages you want to build automatically (either via the auto command or your periodic cron job). The automated builds will fail unless you list at least one package. Make sure to list pkgin here to install a better binary package management tool; you'll find this very handy to keep your installation up-to-date. Note that these configuration files use the /var/pkgcomp directory as the dumping ground for: the pkgsrc tree, the downloaded distribution files, and the built binary packages. We will see references to this location later on.

The cron job: The installer configures a cron job that runs as root to invoke pkgcomp daily. The goal of this cron job is to keep your local package repository up-to-date so that you can do binary upgrades at any time. You can edit the cron job configuration interactively by running sudo crontab -e. This cron job won't have an effect until you have populated the list.txt file as described above, so it's safe to leave it enabled until you have configured pkgcomp. If you want to disable the periodic builds, just remove the pkgcomp entry from the crontab. On slow machines, or if you are building a lot of packages, you may want to consider decreasing the build frequency from daily to weekly.

Sample configuration: Here is what the configuration looks like on my Mac Mini as dumped by the config subcommand. Use this output to get an idea of what to expect. I'll be using the values shown here in the rest of the tutorial:

Building your own packages by hand: Now that you are fully installed and configured, you'll build some stuff by hand to ensure the setup works before the cron job comes in. The simplest usage form, which involves full automation and assumes you have listed at least one package in list.txt, is something like this: This trivially-looking command will: clone or update your copy of pkgsrc; create the sandbox; bootstrap pkgsrc and pbulk; use pbulk to build the given packages; and destroy the sandbox. After a successful invocation, you'll be left with a collection of packages in the /var/pkgcomp/packages directory. If you'd like to restrict the set of packages to build during a manually-triggered build, provide those as arguments to auto. This will override the contents of AUTO_PACKAGES (which was derived from your list.txt file). But what if you wanted to invoke all stages separately, bypassing auto? The command above would be equivalent to: Go ahead and play with these. You can also use the sandbox-shell command to interactively enter the sandbox. See pkgcomp(8) for more details.
Lastly, note that the root user will receive email messages if the periodic pkgcomp cron job fails, but only if it fails. That said, you can find the full logs for all builds, successful or not, under /var/pkgcomp/log.

Installing the resulting packages: Now that you have built your first set of packages, you will want to install them. This is easy on macOS because you did not use pkgsrc itself to install pkgcomp. First, unpack the pkgsrc installation. You only have to do this once: That's it. You can now install any packages you like: The commands above assume you have restarted your shell to pick up the correct path to the pkgsrc installation. If the call to pkg_add fails because of a missing binary, try restarting your shell or explicitly running the binary as /opt/pkg/sbin/pkg_add.

Keeping your system up-to-date: Thanks to the cron job that builds your packages, your local repository under /var/pkgcomp/packages will always be up-to-date; you can use that to quickly upgrade your system with minimal downtime. Assuming you are going to use pkgtools/pkgin as recommended above (and why not?), configure your local repository: And, from now on, all it takes to upgrade your system is:

February 22, 2017
At the obvious risk of this post getting downvoted and eventually closed as too biased/opinionated, I'd nevertheless ask this question. The NetBSD project's tagline is, of course, "it runs NetBSD". I understand that one of the main goals is to run on every possible piece of hardware out there (pages on the internet are full of possible hyperbole, such as "anything with a computing chip in it, even a toaster, shall run NetBSD"). However, if you examine the webpages of IoT hardware from the mid-2010s, there is poor visibility of NetBSD as the first choice of OS. For example, on the Raspberry Pi, Raspbian OS is regarded as the go-to starter OS. Arduino's Wikipedia page says that it runs either Windows, macOS or Linux. Snappy Ubuntu Core and even Win10 IoT (gasp) are staking a claim as leading OSes in the IoT market. While I understand that the last two OSes mentioned above have corporate muscle-power behind them, even open-source job listings do not place much emphasis on NetBSD expertise. The question distills down to: why is NetBSD not considered the first-rate choice on this IoT hardware? This seems like an anti-pattern given the project's canonical goals.

All of a sudden (read: without changing any parameters) my NetBSD virtual machine started acting oddly. The symptoms concern ssh tunneling. From my laptop I launch: Then, in another shell: The ssh debug says: I also tried with localhost:80 to connect to the (remote) web server, with identical results. The remote host runs NetBSD: I am a bit lost. I tried running tcpdump on the remote host, and I spotted these bad chksum: I tried restarting the ssh daemon to no avail. I haven't rebooted yet - perhaps somebody here can suggest other diagnostics. I think it might either be the virtual network card driver, or somebody rooted our ssh.

February 20, 2017
Introduction: I have been working on and off for almost a year trying to get reproducible builds (the same source tree always builds an identical cdrom) on NetBSD. I did not think at the time it would take as long or be so difficult, so I did not keep a log of all the changes I needed to make. I was also not the only one working on this; other NetBSD developers have been making improvements for the past 6 years. I would like to acknowledge the NetBSD build system (aka build.sh), which is a fully portable cross-build system.
This build system has given us a head start in the reproducible builds work. I would also like to acknowledge the work done by the Debian folks, who have provided a platform to run, test and analyze reproducible builds. Special mention to the diffoscope tool, which gives an excellent overview of what's different between binary files by finding out what they are (and, if they are containers, what they contain) and then running the appropriate formatter and diff program to show what's different for each file. Finally, thanks to other developers who started, motivated and did a lot of the work getting us here, like Joerg Sonnenberger and Thomas Klausner for their work on reproducible builds, and Todd Vierling and Luke Mewburn for their work on build.sh.

Sources of difference: Here is what we found that we needed to fix, how we chose to fix it and why, and where we are now. There are many reasons why two separate builds from the same sources can be different. Here's an (incomplete) list:

timestamps: Many things like to keep track of timestamps, especially archive formats (tar(1), ar(1)), filesystems etc. The way to handle each is different, but the approach is to make them either produce files with a 0 timestamp (where it does not matter, as with ar), or with a specific timestamp when using 0 does not make sense (it is not useful to the user).

dates/times/authors etc. embedded in source files: Some programs like to report the date/time they were built, the author, the system they were built on etc. This can be done either by programmatically finding and creating source files containing that information during build time, or by using standard macros such as __DATE__, __TIME__ etc. Usually putting in a constant time, or eliding the information (as we do with kernels and bootblocks), solves the problem.

timezone-sensitive code: Certain filesystem formats (ISO 9660 etc.) don't store raw timestamps but formatted times; to achieve this they convert from a timestamp to localtime, so they are affected by the timezone.

directory order/build order: The build order is not constant, especially in the presence of parallel builds; neither is directory scan order. If those are used to create output files, the output files will need to be sorted so they become consistent.

non-sanitized data stored in files: Writing data structures into raw files can lead to problems. Running the same program on different operating systems or using ASLR makes those issues more obvious.

symbolic links/paths: Having paths embedded in binaries (especially in debugging information) can lead to binary differences. Propagation of the logical path can prove problematic.

general tool inconsistencies: gcc(1) profiling uses a PROFILE_HOOK macro on RISC targets that utilizes the current function number to produce labels, and the processing order of functions is not guaranteed. gpt(8) creation involves uuid generation; these are generally random. Block allocation on msdos filesystems had a random component. makefs(8) uses timezones with timestamps (iso9660), randomness for block selection (msdos), and stores stray pointers in the superblock (ffs).

Every program that is used to generate other output needs to have consistent results. In NetBSD this is done with build.sh, which builds a set of tools from known sources before it can use those tools to build the rest of the system. There is a large number of tools.
There are also internal issues with the tools that make their output non-reproducible, such as nondeterministic symbol creation or capturing parts of the environment in debugging information.

build information / tunables / environment: There are many environment settings, or build variable settings, that can affect the build. These need to be kept constant across builds, so we've changed the list of variables that are reported in Makefile.params.

making sure that the source tree has no local changes.

Variables controlling reproducible builds: Reproducible builds are controlled on NetBSD with two variables: MKREPRO (which can be set to yes or no) and MKREPRO_TIMESTAMP, which is used to set the timestamp of the build's artifacts. This is usually set to the number of seconds from the epoch. The build.sh -P flag handles reproducible builds automatically: it sets the MKREPRO variable to yes, then finds the latest source file timestamp in the tree and sets MKREPRO_TIMESTAMP to that.

Handling timestamps: The first thing that we needed to understand was how to deal with timestamps. Some of the timestamps are not very useful (for example inside random ar archives) so we chose to zero them out. Others, though, become annoying if they are all 0. What does it mean when you mount install media and all the dates on the files are Jan 1, 1970? We decided that a better timestamp would be the timestamp of the most recently modified file in the source tree. Unfortunately this was not easy to find on NetBSD, because we are still using CVS as the source control system, and CVS does not have a good way to provide that. For that we wrote a tool called cvslatest, which scans the CVS metadata files (CVS/Entries) and finds the latest commit. This works well for freshly checked out trees (since CVS uses the source timestamp when checking out), but not with updated trees (because CVS uses the current time when updating files, so that make(1) thinks they've been modified). To fix that, we've added a new flag to the cvs(1) update command, -t, that uses the source checkout time. The build system now needs to evaluate the tree for the latest file, running cvslatest(1) and finding the latest timestamp in seconds from the Epoch, which is set in the MKREPRO_TIMESTAMP variable. This is the same as SOURCE_DATE_EPOCH. Various Makefiles use this variable and MKREPRO to determine how to produce consistent build artifacts. For example, many commands (tar(1), makefs(8), gpt(8), ...) have been modified to take a --timestamp or -T command line switch to generate output files that use the given timestamp instead of the current time. Other software (am-utils, acpica, bootblocks, kernel) used __DATE__ or __TIME__, or captured the user, machine, etc. from the environment, and had to be changed to use a constant time, user, machine, etc. roff(7) documents used the td macro to generate the date of formatting in the document; these have been changed to use the macro conditionally, based on register R (for example as in intro.me), and the Makefile was changed to set that register for MKREPRO.

Handling Order: We don't control the build order of things, and we also don't control directory order, which can be filesystem dependent. The collation order is also environment specific, and sorting needs to be stable (we have not encountered that problem yet). Two different programs caused us problems here: file(1), with the generation of the compiled magic file using directory order (fixed by changing file(1)); and install-info(1), with texinfo(5) files that have no specific order.
For that we developed another tool called sortinfo(1), which sorts those files as a post-processing step. Fortunately the filesystem builders and tar programs usually work with input directories that appear to have a consistent order so far, so we did not have to fix things there.

Permissions: NetBSD already keeps permissions for most things consistent in different ways: the build system uses install(8) and specifies ownership and mode, and the mtree(8) program creates build artifacts using consistent ownership and permissions. Nevertheless, the various architecture-specific distribution media installers used cp(1) and mkdir(1) and needed to be corrected.

Most of the issues found had to do with capturing the environment in debugging information. The two biggest issues were DW_AT_producer and DW_AT_comp_dir. Here you see two changes we made for reproducible builds: We chose to allow variable names (and have gcc(1) expand them) for the source of the prefix map, because the source tree location can vary. Others have chosen to skip -fdebug-prefix-map from the variables to be listed. We added -fdebug-regex-map so that we could handle the NetBSD-specific objdir build functionality; object directories can have many flavors in NetBSD, so it was difficult to use -fdebug-prefix-map to capture that. DW_AT_comp_dir presented a different challenge. We got non-reproducibility when building on paths where either the source or the object directories contained symbolic links. Although gcc(1) does the right thing handling logical paths (it respects PWD), we found that there were problems both in the NetBSD sh(1) (fixed here) and in the NetBSD make(1) (fixed here). Unfortunately we can't depend on the shell to obey the logical path, so we decided to go with: This works because make(1) is a tool (part of the toolchain we provide) whereas sh(1) is not. Another weird issue popped up on sparc64, where a single file in the whole source tree does not build reproducibly. This file is asn1_krb5_asn1.c, which is generated here. The problem is that when profiling on RISC machines gcc uses the PROFILE_HOOK macro, which in turn uses the function number to generate labels. This number is assigned to each function in a source file as it is being compiled. Unfortunately this number is not deterministic because of optimization (a bug), but fortunately turning optimization off fixes the problem.

Status and future work: As of 2017-02-20 we have fully reproducible builds on amd64 and sparc64. We are planning to work on the following areas: vary more parameters on the system build (filesystem types, build OSs); verify that cross building is reproducible; verify that unprivileged builds work; test on all the platforms.

February 19, 2017

At the second annual PillarCon, I facilitated a workshop called Fundamentals of C and Embedded using Mob Programming. On a Mac, we test-drove toggling a Raspberry Pi's onboard LED. Before and after: Before: ACT LED off. Here are the takeaways we wrote down: Could test return type of main(). Why wasn't num_calls 0 to begin with? Maybe provide the mocks in advance (maybe use CMock). Fun idea: fake GPIO device. Vim tricks: cool, but maybe use an easier editor for the target audience. Appropriate amount of effort, but need a bigger payoff. Mob programming supported the learning process/objective. My own thoughts for next time I do this material: Try: providing the mocks in the starting state. Keep: providing a multi-target Makefile and prebuilt cross compiler. Try: using a more discoverable
(e.g. non-modal) text editor. Keep: being prepared with a test list. Try: providing already-written test cases to uncomment one at a time (one of the aspects of James Grenning's training course I especially loved). Keep: being prepared with corners to cut if time gets short. Try: knowing more of the mistakes we might make when cutting corners. Keep: mobbing. Participants who already knew some of this stuff liked the mobbing (new to some of them) and appreciated how I structured the material to unfold. Participants who were new to C and/or embedded (my target audience) came away feeling that they needn't be intimidated by it, and that programming in this context can be as fun and feedbacky as they're accustomed to. Play along at home: Then follow the steps outlined in the README. Further learning: You're welcome to use the workshop materials for any purpose, including your own workshop. If you do, I'd love to hear about it. Or if you'd like me to come facilitate it for your company, meetup group, etc., let's talk.

February 18, 2017

This is a tutorial to guide you through the shiny new pkg_comp 2.0 on NetBSD. Goals: to use pkg_comp 2.0 to build a binary repository of all the packages you are interested in; to keep the repository fresh on a daily basis; and to use that repository with pkgin to keep your NetBSD system up-to-date and secure. This tutorial is specifically targeted at NetBSD but should work on other platforms with some small changes. Expect, at the very least, a macOS-specific tutorial as soon as I create a pkg_comp standalone installer for that platform.

Getting started: First install the sysutils/sysbuild-user package and trigger a full build of NetBSD so that you get usable release sets for pkg_comp. See sysbuild(1) and pkg_info sysbuild-user for details on how to do so. Alternatively, download release sets from the FTP site and later tell pkg_comp where they are. Then install the pkgtools/pkg_comp-cron package. The rest of this tutorial assumes you have done so.

Adjusting the configuration: To use pkg_comp for periodic builds, you'll need to do some minimal edits to the default configuration files. The files can be found directly under /var/pkg_comp, which is pkg_comp-cron's home:

/var/pkg_comp/pkg_comp.conf: This is pkg_comp's own configuration file and the defaults installed by pkg_comp-cron should be good to go. The contents here are divided into three major sections: declaration of how to download pkgsrc, definition of the file system layout on the host machine, and definition of the file system layout for the built packages. You may want to customize the target system paths, such as LOCALBASE or SYSCONFDIR, but you should not have to customize the host system paths.

/var/pkg_comp/sandbox.conf: This is the configuration file for sandboxctl. The default settings installed by pkg_comp-cron should suffice if you used the sysutils/sysbuild-user package as recommended; otherwise tweak the NETBSD_NATIVE_RELEASEDIR and NETBSD_SETS_RELEASEDIR variables to point to where the downloaded release sets are.

/var/pkg_comp/extra.mk.conf: This is pkgsrc's own configuration file. In here, you should configure things like the licenses that are acceptable to you and the package-specific options you'd like to set. You should not configure the layout of the installed files (e.g. LOCALBASE) because that's handled internally by pkg_comp as specified in pkg_comp.conf.

/var/pkg_comp/list.txt: This determines the set of packages you want to build in your periodic cron job. The builds will fail unless you list at least one package.
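As an illustration only (the actual file contents are not reproduced in this feed, and the one-entry-per-line format, the category/name spelling and the use of # comments are assumptions), a /var/pkg_comp/list.txt for this setup might look something like this:

```
# /var/pkg_comp/list.txt: packages to build in the periodic cron job.
# Keep the package management tools themselves listed here (see the
# warning below).
pkgtools/pkgin
pkgtools/pkg_comp-cron
# Everything else is whatever you want available as binaries, e.g.:
editors/vim
misc/tmux
```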
WARNING: Make sure to include pkg_comp-cron and pkgin in this list so that your binary kit includes these essential package management tools. Otherwise you'll have to deal with some minor annoyances after rebootstrapping your system. Lastly, review root's crontab to ensure the job specification for pkg_comp is sane. On slow machines, or if you are building many packages, you will probably want to decrease the build frequency from daily to weekly.

Sample configuration: Here is what the configuration looks like on my NetBSD development machine as dumped by the config subcommand. Use this output to get an idea of what to expect. I'll be using the values shown here in the rest of the tutorial:

Building your own packages by hand: Now that you are fully installed and configured, you'll build some stuff by hand to ensure the setup works before the cron job comes in. The simplest usage form, which involves full automation, is something like this: This trivially-looking command will: check out or update your copy of pkgsrc; create the sandbox; bootstrap pkgsrc and pbulk; use pbulk to build the given packages; and destroy the sandbox. After a successful invocation, you'll be left with a collection of packages in the directory you set in PACKAGES, which in the default pkg_comp-cron installation is /var/pkg_comp/packages. If you'd like to restrict the set of packages to build during a manually-triggered build, provide those as arguments to auto. This will override the contents of AUTO_PACKAGES (which was derived from your list.txt file). But what if you wanted to invoke all stages separately, bypassing auto? The command above would be equivalent to: Go ahead and play with these. You can also use the sandbox-shell command to interactively enter the sandbox. See pkg_comp(8) for more details. Lastly, note that the root user will receive email messages if the periodic pkg_comp cron job fails, but only if it fails. That said, you can find the full logs for all builds, successful or not, under /var/pkg_comp/log.

Installing the resulting packages: Now that you have built your first set of packages, you will want to install them. On NetBSD, the default pkg_comp-cron configuration produces a set of packages for /usr/pkg, so you have to wipe your existing packages first to avoid build mismatches. WARNING: Yes, you really have to wipe your packages. pkg_comp currently does not recognize the package tools that ship with the NetBSD base system (i.e. it bootstraps pkgsrc unconditionally, including bmake), which means that the newly-built packages won't be compatible with the ones you already have. Avoid any trouble by starting afresh. To clean your system, do something like this: Now, rebootstrap pkgsrc and reinstall any packages you previously had: Finally, reconfigure any packages where you had previously made custom edits. Use the backup in /root/etc.old to properly update the corresponding files in /etc. I doubt you made a ton of edits, so this should be easy. IMPORTANT: Note that the last command in this example includes pkgin and pkg_comp-cron. You should install these first to ensure you can continue with the next steps in this tutorial.

Keeping your system up-to-date: If you paid attention when you installed the pkg_comp-cron package, you should have noticed that it configured a cron job to run pkg_comp daily. This means that your packages repository under /var/pkg_comp/packages will always be up-to-date, so you can use that to quickly upgrade your system with minimal downtime.
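For reference, the job that pkg_comp-cron installs is an ordinary root crontab entry. The exact line is not reproduced here; a hypothetical equivalent, in which the schedule, the install path and the use of the auto subcommand are all assumptions, would look like this:

```
# Hypothetical root crontab entry (view or edit with: sudo crontab -e).
# Build everything in list.txt once a day at 03:00; on slow machines,
# change the schedule to e.g. "0 3 * * 6" for a weekly build instead.
0 3 * * *    /usr/pkg/sbin/pkg_comp auto
```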
Assuming you are going to use pkgtools/pkgin (and why not), configure your local repository: And, from now on, all it takes to upgrade your system is:

Lots of storage this week.

February 17, 2017

After many (many) years in the making, pkg_comp 2.0 and its companion sandboxctl 1.0 are finally here! Read below for more details on this launch. I will publish detailed step-by-step tutorials on setting up periodic package rebuilds in separate posts.

What are these tools: pkg_comp is an automation tool to build pkgsrc binary packages inside a chroot-based sandbox. The main goal is to fully automate the process and to produce clean and reproducible packages. A secondary goal is to support building binary packages for a different system than the one doing the builds: e.g. building packages for NetBSD/i386 6.0 from a NetBSD/amd64 7.0 host. The highlights of pkg_comp 2.0, compared to the 1.x series, are: multi-platform support, including NetBSD, FreeBSD, Linux, and macOS; use of pbulk for efficient builds; management of the pkgsrc tree itself via CVS or Git; and a more robust and modern codebase. sandboxctl is an automation tool to create and manage chroot-based sandboxes on a variety of operating systems. sandboxctl is the backing tool behind pkg_comp. sandboxctl hides the details of creating a functional chroot sandbox on all supported operating systems; in some cases, like building a NetBSD sandbox using release sets, things are easy, but in others, like on macOS, they are horrifyingly difficult and brittle.

Storytelling time: pkg_comp's history is a long one. pkg_comp 1.0 first appeared in pkgsrc on September 6th, 2002 as the pkgtools/pkg_comp package. As of this writing, the 1.x series is at version 1.38 and has received contributions from a bunch of pkgsrc developers and external users; even more, the tool was featured in the BSD Hacks book back in 2004. This is a long time for a shell script to survive in its rudimentary original form: pkg_comp 1.x is now a teenager at 14 years of age and is possibly one of my longest-living pieces of software still in use.

Motivation for the 2.x rewrite: For many of these years, I have been wanting to rewrite pkg_comp to support other operating systems. This all started when I first got a Mac in 2005, at which time pkgsrc already supported Darwin but there was no easy mechanism to manage package updates. What would happen (and still happens to this day) is that, once in a while, I'd realize that my packages were out of date (read: insecure), so I'd wipe the whole pkgsrc installation and start from scratch. Very inconvenient; I had to automate that properly. Thus the main motivation behind the rewrite was primarily to support macOS, because this was, and still is, my primary development platform. The secondary motivation came after writing sysbuild in 2012, which trivially configured daily builds of the NetBSD base system from cron; I wanted the exact same thing for my packages.

One, two... no, three rewrites: The first rewrite attempt was sometime in 2006, soon after I learned Haskell in school. Why Haskell? Just because that was the new hotness in my mind and it seemed like a robust language to drive a pretty tricky automation process. That rewrite did not go very far, and that's possibly for the better: relying on Haskell would have decreased the portability of the tool, made it hard to install, and guaranteed to alienate contributors. The second rewrite attempt started sometime in 2010, about a year after I joined Google as an SRE.
This was after I had become quite familiar with Python at work, and I wanted to use that language to rewrite this tool. That experiment didn't go very far either, though I can't remember why; probably because I was busy enough at work and with creating Kyua. The third and final rewrite attempt started in 2013, while I had a summer intern and a little existential crisis. The year before I had written sysbuild and shtk, so I figured recreating pkg_comp using the foundations laid out by these tools would be easy. And it was, to some extent: getting the bare bones of a functional tool took only a few weeks, but that code was far from being stable, portable, and publishable. Life and work happened, so this fell through the cracks until late last year, when I decided it was time to close this chapter so I could move on to some other project ideas. To create the focus and free time required to complete this project, I had to shift my schedule to start the day at 5am instead of 7am, and, many weeks later, the code is finally here and I'm still keeping up with this schedule.

Granted: this third rewrite is not a fancy one, but it wasn't meant to be. pkg_comp 2.0 is still written in shell, just as 1.x was, but this is a good thing because bootstrapping on all supported platforms is easy. I have to confess that I also considered Go recently, after playing with it last year, but I quickly let go of that thought: at some point I had to ship the 2.0 release, and 10 years since the inception of this rewrite was about time.

The launch of 2.0: On February 12th, 2017, the authoritative sources of pkg_comp 1.x were moved from pkgtools/pkg_comp to pkgtools/pkg_comp1 to make room for the import of 2.0. Yes, the 1.x series only existed in pkgsrc, while the 2.x series exists as a standalone project on GitHub. And here we are. Today, February 17th, 2017, pkg_comp 2.0 saw the light!

Why sandboxctl as a separate tool: sandboxctl is the supporting tool behind pkg_comp, taking care of all the logic involved in creating chroot-based sandboxes on a variety of operating systems. Some are easy, like building a NetBSD sandbox using release sets, and others are horrifyingly difficult, like macOS. In pkg_comp 1.x, this logic was bundled right into the pkg_comp code, which made it pretty much impossible to generalize for portability. With pkg_comp 2.x, I decided to split this out into a separate tool to keep responsibilities isolated. Yes, the integration between the two tools is a bit tricky, but it allows for better testability and understandability. Lastly, having sandboxctl as a standalone tool, instead of just a separate code module, gives you the option of using it for your own sandboxing needs. I know, I know: the world has moved on to containerization and virtual machines, leaving chroot-based sandboxes as a very rudimentary thing, but that's all we've got in NetBSD, and pkg_comp targets primarily NetBSD. Note, though, that because pkg_comp is separate from sandboxctl, there is nothing preventing the addition of different sandboxing backends to pkg_comp.

Installation: Installation is still a bit convoluted unless you are on one of the tier 1 NetBSD platforms or you already have pkgsrc up and running. For macOS in particular, I plan on creating and shipping an installer image that includes all of pkg_comp's dependencies, but I did not want to block the first launch on this. For now, though, you need to download and install the latest source releases of shtk, sandboxctl,
and pkg_comp, in that order. Pass the --with-atf=no flag to the configure scripts to cut down the required dependencies. On macOS, you will also need OSXFUSE and the bindfs file system. If you are already using pkgsrc, you can install the pkgtools/pkg_comp package to get the basic tool and its dependencies in place, or you can install the wrapper pkgtools/pkg_comp-cron package to create a pre-configured environment with a daily cron job to run your builds. See the package's MESSAGE (with pkg_info pkg_comp-cron) for more details.

Documentation: Both pkg_comp and sandboxctl are fully documented in manual pages. See pkg_comp(8), sandboxctl(8), pkg_comp.conf(5) and sandbox.conf(5) for plenty of additional details. As mentioned at the beginning of the post, I plan on publishing one or more tutorials explaining how to bootstrap your pkgsrc installation using pkg_comp on, at least, NetBSD and macOS. Stay tuned. And, if you need support or find anything wrong, please let me know by filing bugs in the corresponding GitHub projects: jmmv/pkg_comp and jmmv/sandboxctl.

February 16, 2017

I claim an IPv6 address using ifconfig in a script. This address is then immediately used to listen on a TCP port. When I write the script like this, it fails because the service is unable to listen: However, it succeeds when I do it like this: I tried writing the output of ifconfig directly after running the add operation. It appears that ifconfig reports the IP address as being tentative, which apparently prevents a service from listening on it. Naturally, waiting exactly one second and hoping that the address has become available is not a very good way to handle this. How can I wait for a tentative address to become available, or make ifconfig return later so that the address is all set up?

I finally registered; I have been reading the forum for years. I'll simply copy this from LQ. I have already written to a couple of lists (including netbsd-users) but without results. Running 7.0.2 with the out-of-the-box kernel. All my GTK2 apps segfault on keyboard input. lxappearance, for example: when looking for a theme you can start pressing keys and it will search, but in my case it dumps core in /usr/lib/libpthread.so.1, /usr/lib/libc.so.12 and /usr/pkg/lib/libXcursor.so.1. The same thing happens when typing something into a GTK2 text editor (leafpad), or looking for something in the Ctrl+O window in firefox or gimp or any other programme. gimp can't even run inside gdb because of:

Program received signal SIGTRAP, Trace/breakpoint trap. 0x00007f7fea49f6aa in lwppark60 () from /usr/lib/libc.so.12
(gdb) bt
0 0x00007f7fea49f6aa in lwppark60 () from /usr/lib/libc.so.12
1 0x00007f7fec808f2b in pthread_cond_timedwait () from /usr/lib/libpthread.so.1
2 0x00007f7feb880b80 in g_cond_wait () from /usr/pkg/lib/libglib-2.0.so.0
3 0x00007f7feb81d7cd in g_async_queue_pop_intern_unlocked () from /usr/pkg/lib/libglib-2.0.so.0
4 0x00007f7feb86742f in g_thread_pool_thread_proxy () from /usr/pkg/lib/libglib-2.0.so.0
5 0x00007f7feb866a7d in g_thread_proxy () from /usr/pkg/lib/libglib-2.0.so.0
6 0x00007f7fec80a9cc in ?? () from /usr/lib/libpthread.so.1
7 0x00007f7fea483de0 in ?? () from /usr/lib/libc.so.12
8 0x0000000000000000 in ?? ()

Firefox also has problems in libc.so.12 and libpthread.so.1 but doesn't mention lwppark60. It also can't run inside gdb. lxappearance also dumps core when clicking Apply after changing something (themes, cursor or icon themes, fonts etc.) with another output:

0 0x00007f7fefcb27ba in ?? () from /usr/lib/libc.so.12
1 0x00007f7fefcb2bc7 in malloc () from /usr/lib/libc.so.12
2 0x00007f7ff1849782 in g_malloc () from /usr/pkg/lib/libglib-2.0.so.0
3 0x00007f7ff185ef1c in g_memdup () from /usr/pkg/lib/libglib-2.0.so.0
4 0x00007f7ff18356b8 in g_hash_table_insert_node () from /usr/pkg/lib/libglib-2.0.so.0
5 0x00007f7ff1835823 in g_hash_table_insert_internal () from /usr/pkg/lib/libglib-2.0.so.0
6 0x00007f7ff183ccb1 in g_key_file_flush_parse_buffer () from /usr/pkg/lib/libglib-2.0.so.0
7 0x00007f7ff183cf62 in g_key_file_parse_data () from /usr/pkg/lib/libglib-2.0.so.0
8 0x00007f7ff183d0e1 in g_key_file_load_from_fd () from /usr/pkg/lib/libglib-2.0.so.0
9 0x00007f7ff183d99e in g_key_file_load_from_file () from /usr/pkg/lib/libglib-2.0.so.0
10 0x0000000000405532 in start ()

Apart from these programmes, I receive SIGILL in mplayer when trying to play videos; the backtrace doesn't tell anything useful. sxiv, an image viewer, segfaults with this:

0 0x00007f7ff64b209f in ?? () from /usr/lib/libc.so.12
1 0x00007f7ff64b3983 in free () from /usr/lib/libc.so.12
2 0x000000000040729c in remove_file ()
3 0x0000000000409a92 in main ()

Previously, if built from a local pkgsrc tree, it worked, but now it has stopped working at all. mpg321 dumps core and says Memory fault with this backtrace:

0 0x00007f7ff78068b1 in sem_post () from /usr/lib/libpthread.so.1
1 0x000000000040afe0 in ?? ()
2 0x0000000000403695 in ?? ()
3 0x00007f7ff7ffa000 in ?? ()
4 0x0000000000000002 in ?? ()
5 0x00007f7ffffffdb0 in ?? ()
6 0x00007f7ffffffdb7 in ?? ()
7 0x0000000000000000 in ?? ()

I did memtests, once for four hours (two passes) and once for eight hours (eight passes). I did Dell's ePSA tests (a diagnostic utility accessed from the BIOS), which has its own memtest, apart from checking the hard drive, the power supply, the keyboard, the fans and the CPU; all of them returned no errors. I rebuilt gtk2 with debug symbols but it changed nothing. On LQ it was suggested that I have hardware problems, but I am not convinced. Every programme described above worked inside an Ubuntu LiveUSB and a Void Linux LiveUSB on the same machine (picked because they have different libcs). Before, when I had NetBSD with X11 a couple of months ago (and earlier), I didn't have these errors. On the Interwebs I found similar messages on the Arch forum and Launchpad. Is there a need for a 24-hour memtest? Should I just remove each of the two memory modules and try? Is it hardware related after all? Thanks everyone for any kind of help.

February 14, 2017

The LLVM project is a quickly moving target, and this also applies to the LLVM debugger, LLDB. It is actively used in several first-class operating systems, and - thanks to my spare time dedication - NetBSD joined the LLDB club in 2014; only lately has the native support been substantially improved, and the feature set is quickly approaching the support level of Linux and FreeBSD. During this work 12 patches were committed upstream, 12 patches were submitted for review, 11 new ATF tests were added, 2 NetBSD bugs were filed and several dozen commits were introduced in pkgsrc-wip, reducing the local patch set to mostly the Native Process Plugin for NetBSD.

What has been done in NetBSD

1. Triaged issues of ptrace(2) in the DTrace/NetBSD support: Chuck Silvers works on improving DTrace in NetBSD and he has detected an issue where tracer signals are being ignored in libproc. The libproc library is a compatibility layer for DTrace simulating proc capabilities on the SunOS family of systems. I've verified that the current behavior of signal routing is incorrect. The NetBSD kernel correctly masks signals emitted by a tracee, not routing them to its tracer.
On the other hand the masking rules in the inferior process blacklists signals generated by the kernel, which is incorrect and turns a debugger into a deaf listener. This is the case for libproc as signals were masked and software breakpoints triggering INT3 on i386 amd64 CPUs and SIGTRAP with TRAPBRKP sicode wasnt passed to the tracer. This isnt limited to turning a debugger into a deaf listener, but also a regular execution of software breakpoints requires: rewinding the program counter register by a single instruction, removing trap instruction and restoring the original instruction. When an instruction isnt restored and further code execution is pretty randomly affected, it resulted in execution anomalies and breaking of tracee. A workaround for this is to disable signal masking in tracee. Another drawback inspired by the DTrace code is to enhance PTSYSCALL handling by introducing a way to distinguish syscall entry and syscall exit events. Im planning to add dedicated sicodes for these scenarios. While there, there are users requesting PTSTEP and PTSYSCALL tracing at the same time in an efficient way without involving heuristcs. Ive filed the mentioned bug: Ive added new ATF tests: Verify that masking single unrelated signal does not stop tracer from catching other signals Verify that masking SIGTRAP in tracee stops tracer from catching this raised signal Verify that masking SIGTRAP in tracee does not stop tracer from catching software breakpoints Verify that masking SIGTRAP in tracee does not stop tracer from catching single step trap Verify that masking SIGTRAP in tracee does not stop tracer from catching exec() breakpoint Verify that masking SIGTRAP in tracee does not stop tracer from catching PTRACEFORK breakpoint Verify that masking SIGTRAP in tracee does not stop tracer from catching PTRACEVFORK breakpoint Verify that masking SIGTRAP in tracee does not stop tracer from catching PTRACEVFORKDONE breakpoint Verify that masking SIGTRAP in tracee does not stop tracer from catching PTRACELWPCREATE breakpoint Verify that masking SIGTRAP in tracee does not stop tracer from catching PTRACELWPEXIT breakpoint 2. ELF Auxiliary Vectors The ELF file format permits to transfer additional information for a process with a dedicated container of properties, its named ELF Auxilary Vector . Every system has its dedicated way to read this information in a debugger from a tracee. The NetBSD approach is to transfer this vector with a ptrace (2) API PIODREADAUXV . Our interface shares the API with OpenBSD. I filed a bug that our interface returns vector size of 8496 bytes, while OpenBSD has constant 64 bytes. It was diagnosed and fixed by Christos Zoluas that we were incorrectly counting bits and bytes and this enlarged the data streamlined. The bug was harmless and had no known side-effects besides large chunk of zeroed data. There is also a prepared local patch extending NetBSD platform support to read information for this vector, its primarily required for correct handling of PIE binaries. At the moment there is no interface similar to info auxv to the one from GDB. Unfortunately at the current stage, this code is still unused by NetBSD. I will return to it once the Native Process Plugin is enhanced. Ive filed the mentioned bug: Ive added new ATF test: Verify PTREADAUXV called for tracee . What has been done in LLDB 1. Resolving executables name with sysctl(7) In the past the way to retrieve a specified process executable path name was using Linux-compatibile feature in procfs ( proc ). 
The canonical solution on Linux is to resolve path of procPIDexe . Christos Zoulas added in DTrace port enhancements a solution similar to FreeBSD to retrieve this property with sysctl (7). This new approach removes dependency on proc mounted and Linux compatibility functionality. Support for this has been submitted to LLDB and merged upstream: 2. Real-Time Signals The key feature of the POSIX standard with Asynchronous IO is to support Real-Time Signals. One of their use-cases is in debugging facilities. Support for this set of signals was developed during Google Summer of Code 2016 by Charles Cui and reviewed and committed by Christos Zoulas. Ive extended the LLDB capabilities for NetBSD to recognize these signals in the NetBSDSignals class. Support for this has been submitted to LLDB and merged upstream: 3. Conflict removal with system-wide six. py The transition from Python 2.x to 3.x is still ongoing and will take a while. The current deadline support for the 2.x generation has been extended to 2020. One of the ways to keep both generations supported in the same source-code is to use the six. py library (py2 x py3 6.py). It abstracts commonly used constructs to support both language families. The issue for packaging LLDB in NetBSD was to install this tiny library unconditionally to a system-wide location. There were several solutions to this approach: drop Python 2.x support, install six. py into subdirectory, make an installation of six. py conditional. The first solution would turn discussion into flamewar, the second one happened to be too difficult to be properly implemented as the changes were invasive and Python is used in several places of the code-base (tests, bindings. ). The final solution was to introduce a new CMake option LLDBUSESYSTEMSIX - disabled by default to retain the current behavior. To properly implement LLDBUSESYSTEMSIX . I had to dig into installation scripts combined in CMake and Python files. It wasnt helping that Python scripts were reinventing getopt (3) functionality. and I had to alter it in order to introduce a new option --useSystemSix . Support for this has been submitted to LLDB and merged upstream: 4. Do not pass non-POD type variables through variadic function There was a long standing local patch in pkgsrc, added by Tobias Nygren and detected with Clang. According to the C11 standard 5.2.27: Passing a potentially-evaluated argument of class type having a non-trivial copy constructor, a non-trivial move constructor, or a non-trivial destructor, with no corresponding parameter, is conditionally-supported with implementation-defined semantics. A short example to trigger similar warning was presented by Joerg Sonnenberg: This code compiled against libc gives: Support for this has been submitted to LLDB and merged upstream: 5. Add NetBSD support in Host::GetCurrentThreadID Linux has a very specific thread model, where process is mostly equivalent to native thread and POSIX thread - its completely different on other mainstream general-purpose systems. That said fallback support to translate pthreadt on NetBSD to retrieve the native integer identifier was incorrect. The proper NetBSD function to retrieve light-weigth process identification is to call lwpself (2). Support for this has been submitted to LLDB and merged upstream: 6. Synchronize PlatformNetBSD with Linux The old PlatformNetBSD code was based on the FreeBSD version. 
While the FreeBSD current one is still similar to the one from a year ago, its inappropriate to handle a remote process plugin approach. This forced me to base refreshed code on Linux. After realizing that PlatformPlugin on POSIX platforms suffers from code duplication, Pavel Labath helped out to eliminate common functions shared by other systems. This resulted in a shorter patch synchronizing PlatformNetBSD with Linux, this step opened room for FreeBSD to catch up. Support for this has been submitted to LLDB and merged upstream: 7. Transform ProcessLauncherLinux to ProcessLauncherPosixFork It is UNIX specific that signal handlers are global per application. This introduces issues with wait (2)-like functions called in tracers, as these functions tend to conflict with real-life libraries, notably GUI toolkits (where SIGCHLD events are handled). The current best approach to this limitation is to spawn a forkee and establish a remote connection over the GDB protocol with a debugger frontend. ProcessLauncherLinux was prepared with this design in mind and I have added support for NetBSD. Once FreeBSD will catch up, they might reuse the same code. Support for this has been submitted to LLDB and merged upstream: reviews. llvm. orgD29347 - Add ProcessLauncherNetBSD to spawn a tracee renamed to Transform ProcessLauncherLinux to ProcessLauncherPosixFork committed r293768 8. Document that LaunchProcessPosixSpawn is used on NetBSD Host::GetPosixspawnFlags was built for most POSIX platforms - however only Apple, Linux, FreeBSD and other-GLIBC ones (I assume DebiankFreeBSD to be GLIBC-like) were documented. Ive included NetBSD to this list. Support for this has been submitted to LLDB and merged upstream: Document that LaunchProcessPosixSpawn is used on NetBSD committed r293770 9. Switch std::callonce to llvm::callonce There is a long-standing bug in libstdc on several platforms that std::callonce is broken for cryptic reasons. This motivated me to follow the approach from LLVM and replace it with homegrown fallback implementation llvm::callonce . This change wasnt that simple at first sight as the original LLVM version used different semantics that disallowed straight definition of non - static onceflag . Thanks to cooperation with upstream the proper solution was coined and LLDB now works without known regressions on libstdc out-of-the-box. Support for this has been submitted to LLVM, LLDB and merged upstream: 10. Other enhancements I a had plan to push more code in this milestone besides the mentioned above tasks. Unfortunately not everything was testable at this stage. Among the rescheduled projects: In the NetBSD platform code conflict removal in GetThreadName SetThreadName between pthreadt and lwpidt . It looks like another bite from the Linux thread model. Proper solution to this requires pushing forward the Process Plugin for NetBSD. Host::LaunchProcessPosixSpawn proper setting ::posixspawnattrsetsigdefault on NetBSD - currently untestable. Fix false positives - premature before adding more functions in NetBSD Native Process Plugin. On the other hand Ive fixed a build issue of one test on NetBSD: Plan for the next milestone Ive listed the following goals for the next milestone. 
mark exect (3) obsolete in libc remove libpthreaddbg (3) from the base distribution add new API in ptrace (2) PTSETSIGMASK and PTGETSIGMASK add new API in ptrace (2) to resume and suspend a specific thread finish switch of the PTWATCHPOINT API in ptrace (2) to PTGETDBREGS amp PTSETDBREGS validate i386, amd64 and Xen proper support of new interfaces upstream to LLDB accessors for debug registers on NetBSDamd64 validate PTSYSCALL and add a functionality to detect and distinguish syscall-entry syscall-exit events validate accessors for general purpose and floating point registers Post mortem FreeBSD is catching up after NetBSD changes, e. g. with the following commit: This move allows to introduce further reduction of code-duplication. There still is a lot of room for improvement. Another benefit for other software distributions, is that they can now appropriately resolve the six. py conflict without local patches. These examples clearly show that streamlining NetBSD code results in improved support for other systems and creates a cleaner environment for introducing new platforms. A pure NetBSD-oriented gain is improvement of system interfaces in terms of quality and functionality, especially since DTraceNetBSD is a quick adopter of new interfaces. and indirectly a sandbox to sort out bugs in ptrace (2). The tasks in the next milestone will turn NetBSDs ptrace (2) to be on par with Linux and FreeBSD, this time with marginal differences. To render it more clearly NetBSD will have more interfaces in readwrite mode than FreeBSD has (and be closer to Linux here), on the other hand not so many properites will be available in a thread specific field under the PTLWPINFO operation that caused suspension of the process. Another difference is that FreeBSD allows to trace only one type of syscall events: on entry or on exit. At the moment this is not needed in existing software, although its on the longterm wishlist in the GDB project for Linux. It turned out that, I was overly optimistic about the feature set in ptrace (2), while the basic ones from the first milestone were enough to implement basic support in LLDB. it would require me adding major work in heuristics as modern tracers no longer want to perform guessing what might happened in the code and what was the source of signal interruption. This was the final motivation to streamline the interfaces for monitoring capabilities and now Im adding remaining interfaces as they are also needed, if not readily in LLDB, there is DTrace and other software that is waiting for them now. Somehow I suspect that I will need them in LLDB sooner than expected. This work was sponsored by The NetBSD Foundation. The NetBSD Foundation is a non-profit organization and welcomes any donations to help us continue to fund projects and services to the open-source community. Please consider visiting the following URL, and chip in what you can: February 09, 2017 We became tired of waiting. File Info: 7Min, 3MB. Ogg Link: archive. orgdownloadbsdtalk266bsdtalk266.ogg February 08, 2017 Background I am using a sparc64 Sun Blade 2500 (silver) as a desktop machine - for my pretty light desktop needs. Besides the usual developer tools (editors, compilers, subversion, hg, git) and admin stuff (all text based) I need mpg123 and mserv for music queues, Gimp for image manipulation and of course Firefox. Recently I updated all my installed pkgs to pkgsrc-current and as usual the new Firefox version failed to build. 
Fortunately the issues were minor, as they all had been handled upstream for Firefox 52 already, all I needed to do was back-porting a few fixes. This made the pkg build, but after a few minutes of test browsing, it crashed. Not surprisingly this was reproducible, any web site trying to play audio triggered it. A bit surprising though: the same happened on an amd64 machine I tried next. After a bit digging the bug was easy to fix, and upstream already took the fix and committed it to the libcubeb repository. So I am now happily editing this post using Firefox 51 on the Blade 2500. I saw one crash in two days of browsing, but unfortunately could not (yet) reproduce it (I have gdb attached now). There will be future pkg updates certainly. Future Obstacles You may have read elsewhere that Firefox will start to require a working Rust compiler to build. This is a bit unfortunate, as Rust (while academically interesting) is right now not a very good implementation language if you care about portability. The only available compiler requires a working LLVM back end, which we are still debugging. Our auto-builds produce sparc sets with LLVM, but the result is not fully working (due to what we believe being code gen bugs in LLVM). It seems we need to fix this soon (which would be good anyway, independent of the Rust issue). Besides the back end, only very recently traces of sparc64 support popped up in Rust. However, we still have a few firefox versions time to get it all going. I am optimistic. Another upcoming change is that Cairo (currently used as 2D graphics back end, at least on sparc64) will be phased out and Skia will be the only supported software rendering target. Unfortunately Skia does (as of now) not support any big endian machine at all. I am looking for help getting Skia to work on big endian hardware in general, and sparc64 in particular. Alternatives Just in case, I tested a few other browsers and (so far) they all failed: NetSurf Nice, small, has a few tweaks and does not yet support JavaScript good enough for many sites MidoriThey call it lightweight but it is based on WebKit, which alone is a few times more heavy than all of Firefox. It crashes immediately at startup on sparc64 (I am investigating, but with low priority - actually I had to replace the hard disk in my machine to make enough room for the debug object files for WebKit - it takes So, while it is a bit of a struggle to keep a modern browser working on my favorite odd-ball architecture, it seems we will get at least to the Firefox 52 ESR release, and that should give us enough time to get Rust working and hopefully continue with Firefox. February 07, 2017 So finally Ive moved all services from my old server to my Christmas Xen box. This was not without problems due to the fact it had to run NetBSD - current gcc toolchain is broken for some packages which affected running any PHP build clang toolchain was broken for my config (USESSP yes and . February 04, 2017 Note the end this week of pc98, the most focused of niche platforms. January 31, 2017 What has been done in NetBSD What has been done in LLDB Plan for the next milestone Accidental theme this week: books. What are the techniques generally people follow to dump full core dump if the size of core dump is more than the RAM and flash. Say, kernel core is of 2GB size but we have exactly 2GB of RAM and 1GB of disk space. I am aware external USB and tftp options. But, reliability and stability matters when we choose these options. 
How do embedded people handle these type of issues and what are the techniques available Platform: NetBSD, ARM7 January 18, 2017 Previously This is the sixth in a series of Nifty and Minimally Invasive qmail Tricks, following Losing services (and eventually restoring them) When my Mac mini s hard drive died in the Great Crash of Fall 2008. taking this website and my email offline with it, I was already going through a rough time, and my mental bandwidth was extremely limited. I expended some of it explaining to friends what they could do about their hosted domains until such time as my brain became available again (as I assumed andor hoped it eventually would). I expended a bit more asking a friend to do a small thing to keep my email flowing somewhere I could get it. And then I was spent. The years where I used Gmail and had no website felt like years in the wilderness. That feeling could mostly have been about how I missed the habit of reflecting about my life now and again, writing about it, and sharing. But when the website returned four years ago (in order to remember Aaron Swartz ), the feeling didnt go away. All I got was a small sense of relief that my writings and recordings were available and that I could safely revive my old habit. After a year and half of reflecting, writing, and sharing, the feels-needle hadnt rebounded much further. It was only after painstakingly restoring all my old email (from Mail. apps cache, using emlx2maildir ), moving it up to my IMAP server, carefully merging six years worth of Gmail into that, accepting SMTP deliveries for schmonz. and not needing Gmail at all for several weeks that I noticed my long, strange sojourn had ended. Hypothetically speaking If it so happened that Id instead fixed email first, Id also have felt a tiny bit weird till my website was back. But only a tiny bit. When my web servers down, you might not hear from me when my mail servers down, I cant hear from you or, as happened in 2008, from my professors during finals week. So while web hosting can be interesting. mail hosting keeps me attached to what it feels like to be responsible for a production service. Keeping it real I value this firsthand understanding very, very highly. I started as a sysadmin, Im often still a developer, and thats part of why Im sometimes helpful to others. But since Im always in danger of forgetting lessons I learned by doing it, Im always in danger of being harmful when I try to help others do it . As a coach, one of my meta-jobs is to remind myself what it takes to know the risks, decide to ship it, live with the consequences, tighten the shipping-it loop until its tight enough, and notice when that stops being true. And thats why I run my own mail server. Whats new this week My 2014 mail server was configured just about identically with my 2008 one, for which it was handy to consult the earlier articles in this series . Then, recently, my weekly build broke on the software Ive been using to send mail. It was a trivial breakage, easy to fix, but it reminded me about a non-trivial future risk that I didnt want hanging over my head anymore. (For more details, see my previous post .) Now Im sending mail another way. Clients are unchanged, the server no longer needs TMDA or its dependencies, and I no longer have a specific expectation for how this aspect of my mail service will certainly break in the future. 
(Just some vague guesses, like a newly discovered compromise in the TLS protocol or OpenSSLs implementation thereof, or STARTTLS or Stunnel s implementation thereof.) A couple iterations First, I tried the smallest change that might work: Replacing tmda-ofmipd with the original ofmipd from mess822 (by the author of qmail. the software around which my mail service is built), Wrapped in SMTP AUTH by spamdyke (new use of an existing tool), Wrapped in STARTTLS by stunnel (as before). It worked TMDA no longer needed. I committed an update to my qmail-run package with a new shell script to manage this ofmipd service. uninstalled TMDA, and removed its configuration files. Next, I tried a change that might shorten the chain of executables : It worked Second instance of spamdyke no longer needed. To start a mail submission service on localhost port 26, these are the lines I added to etcrc. conf : To make the service available on the network, this is the config from etcstunnelstunnel. conf : (It already had this stanza, but with 8025 where tmda-ofmipd was listening. I simply changed the port number and restarted stunnel .) Im still relying on spamdyke for other purposes, but Im comfortable with those. Im still relying on stunnel for STARTTLS, but Im relatively comfortable keeping OpenSSL contained in its own address space and user account. Refactoring for mail hosting The present configuration is a refactoring. no externally visible change to email clients, yes internally visible change to email administrator (moi). I believe this refactoring was one of the best kind, able to be performed safely and reducing the risk I was worried about. The current configuration is much more likely to meet my future need to not have a production outage that interrupts my work for arbitrary duration while I scramble to understand and fix it. I dont have any more cheap ideas for lowering my risk, and it feels low enough anyway. So Im comfortable that this is the right place to stop . Conclusion Want to learn to see the consequences of your choices andor help other people do the same Consider productionizing something important to you. January 14, 2017 Im trying to compile a program with clang and libc on NetBSD. Clang version is 3.9.0, and NetBSD version is 7.0.2. The compile is failing with: ltcstddefgt is present, but it appears to be GCCs: If I am parsing Index of pubNetBSDNetBSD-release-7srcexternalbsdlibc correctly, the library is available. When I attempt to install libc or libcxx : Is Clang with libc a supported configuration on NetBSD How do we use Clang and libc on NetBSD January 11, 2017 Ill install netbsd on an old computer, but I am sure Ill have a hard time to get wireless internet working in a way or another. I figured I could do that easily if I managed to install things for this computer, on another one, the one I am using now, by crosscompiling. And that it would be a good training, isnt it For now, if pkgadd and so on are recognized, I still cant pkgadd pkgin or any software: it says it doesnt know that package. How come. I see it, its there. Grazie. Heres my PATH variable: PATHusrpkgsbin:usrpkgbin:usrlocalbin:usrbin:bin:usrlocalgames:usrgames ps:some might remember me. Indeed, I failed using this system many time, but I am a romantic, and I cant stop feeling something in my heart anytime I read pkgsrc or netbsd, I just dont know why. so here I am again :D January 09, 2017 NetBSDs scheduler was recently changed to better distribute load of long-running processes on multiple CPUs. 
So far, the associated sysctl tweaks were not documented, and this was changed now, documenting the kern. sched sysctls. For reference, here is the text that was added to the sysctl(7) manpage. Well, subject says it all. To quote from Soren Jacobsens email. The first release candidate of NetBSD 7.1 is now available for download at: Those of you who prefer to build from source can continue to follow the netbsd-7 branch or use the netbsd-7-1-RC1 tag. There have been quite a lot of changes since 7.0. See srcdocCHANGES-7.1 for the full list. Please help us out by testing 7.1RC1. We love any and all feedback. Report problems through the usual channels (submit a PR or write to the appropriate list). More general feedback is welcome at email160protected Ive installed NetBSD 7.0.1 in a KVM virtual machine under libvirt on a Fedora 25 Linux host. I want to use spice. so i specified the requisite qxl graphic in the virtual machine then installed xf86-video-qxl-0.1.4nb1 with pkgin in the NetBSD guest. But both varlogxdm. log and varlogXorg.0.log complained that they couldnt find the qxl module. Then I realized they were looking in usrX11R7libmodules but the qxl package put it in usrpkglibxorgmodules. To solve that, I manually added a symbolic link. And indeed, that solved the not found problem. (But why the two directories. ) Now they complain that its the wrong driver. Both xdm. log and Xorg.0.log gripe: (EE) module ABI major version (20) doesnt match the servers version (10) (EE) Failed to load module qxl (module requirement mismatch, 0) Why are things out of sync in the NetBSD code base How can anyone get X to work What can I do to solve this January 08, 2017 im trying to install nzbget. i think it was in the pkgsrc way back but its not there anymore. so i tried this: (1) i downloaded the source from nzbget website (2) then. configure said A compiler with support for C14 language features is required.. so i installed gcc6 using pkgin in gcc6 (3) so then i tried PATHusrpkggcc6bin:PATH. configure and it said compiler is ok, but now i got configure: error: ncurses library not found (4) i have ncurses lib in usrpkgincludencurses, how to let. configure know the location of ncurses lib Is it normal that when I use Zlib from Pkgsrc or base as reference via include bl3 for a project (like the current supertuxkart version 0.9.2) that within. buildlinkinclude directory no symlinks exist of zlib. h and zconf. h I newer saw this behaviour before and it breaks the compilation. January 05, 2017 Last night, mere moments from letting me commit a new package of Test::Continuous (continuous testing for Perl), my computer acted as though it knew its replacement was on the way and didnt care to meet it. This tiny mid-2013 11 MacBook Air made it relatively ergonomic to work from planes, buses, and anywhere else when I lived in New York and flew regularly to see someone important in Indiana, and continued to serve me well when that changed and changed again . The next thing I was planning to do with it was write this post. Instead I rebooted into DiskWarrior and crossed my fingers. Things get in your way, or threaten to. Thats life. But when you have slack time. you can Cope better when stuff happens, Invest in reducing obstacles, and Feel more prepared for the next time stuff happens. Having enough slack is as virtuous a cycle as insufficient slack is a vicious one. Paying down non-tech debts Last year I decided to spend more time and energy improving my health. 
Having recently spent a few weeks deliberately not paying attention to any of that, I'm quite sure that I prefer paying attention to it, and am once again doing so. Learning to make my health a priority required that I make other things non-priorities, notably Agile in 3 Minutes. It no longer requires that. I've recently invested in making the site easier for me to publish, and you may notice that it's easier for you to browse. I didn't have enough slack to do these things when I was writing and recording a new episode every week. Now that enough of them have been taken care of, I feel prepared to take new steps with the podcast.

And tech debts: Earlier this week I noticed a broken link in a comment on Refactorings for web hosting, so I took a moment to check for other broken links on this site (ikiwiki makes it easy). Before that, I inspected and minimized the differences between dev (my laptop) and prod (my server, where you're reading this), updated prod with the latest ikiwiki settings, and (because it's all in Git) rebased dev from prod. In so doing, I observed that more config differences could be easily harmonized by adjusting some server paths to match those on my laptop. (When Apple introduced System Integrity Protection, pkgsrc on Mac OS X could no longer install under /usr, and moved to /opt. With my automated NetBSD package build, I can easily build the next batch for /opt/pkg as well, retaining /usr/pkg as a symlink for a while. So I have.) I've been running lots of these builds in the past week anyway, because a family of packages I maintain in pkgsrc had been outdated for quite a while and I finally got around to catching them up to upstream. Once they built on OS X, I committed the updates to the cross-platform package system, only to notice that at least one of them didn't build on NetBSD. So I fixed it, ran another build, saw what else I broke, and repeated until green.

And taking on patience debt: telling you about more of this crud. Due to another update that temporarily broke the build of TMDA, I was freshly reminded that that's a relatively biggish liability in my server setup. I use TMDA to send mail, which is not mainly what it's for, and I never got around to using it for what it's for (protecting against spam with automated challenge-response), and it hasn't been maintained for years, and is stuck needing an old version of Python. On the plus side, running a weekly build means that when TMDA breaks more permanently, I'll notice pretty quickly. On the minus side, when that happens, I'll feel pressure to fix or replace it so I can (1) continue to send email like a normal person and (2) restart the weekly build like a me-person. If I can reduce the liability now, maybe I can avoid feeling that pressure later. Investigating alternatives, I remembered that Spamdyke, which I already use for delaying the SMTP greeting, blacklisting from a DNSBL as well as To: addresses that only get spam anymore, and greylisting from unknown senders, can provide SMTP AUTH. So I'll try keeping stunnel and replacing tmda-ofmipd with a second instance of spamdyke. If that's good, I'll remove mail/tmda from the list of packages I build every week, then build spamdyke with OpenSSL support and try letting it handle the TLS encryption directly. If that's good, I'll remove security/stunnel from the list of packages too, leaving me at the mercy of fewer pieces of software breaking. Leaning more heavily on Spamdyke isn't a clear net reduction of risk. When a bad bug is found, it'll impact several aspects of my mail service. And if and when NetBSD moves from GCC to Clang, I'll have to add lang/gcc to my list of packages and instruct pkgsrc to use it when building Spamdyke, or else come up with a patch to remove Spamdyke's use of anonymous inner functions in C. (That could be fun. I recently started learning C.)
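For what it's worth, a minimal, hedged sketch of how a package can ask pkgsrc for GCC (the variable exists in pkgsrc; the version number here is only illustrative):

    # In the package's Makefile (or, more bluntly, in /etc/mk.conf):
    # require at least this GCC version; pkgsrc will add a dependency on an
    # appropriate lang/gcc* package when the base compiler cannot satisfy it.
    GCC_REQD+=      4.8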
I could go on, but I'm a nice person who cares about you. That's enough of that.

So what? All these builds pushing my soon-to-be-replaced laptop through its final paces as a development machine might have had something to do with triggering its misbehavior last night. And all this work seems like, well, a lot of work. Is there some way I could do less of it? Yes, of course. But given my interests and goals, it might not be a clear net improvement. For instance, when Tim Ottinger drew my attention to that Test::Continuous Perl module, being a pkgsrc developer gave me an easy way to uninstall it if I wound up not liking it, which meant it was easy to try, which meant I tried it. I want conditions in my life to favor trying things. So I'm invested in preserving and extending those conditions. In Gary Bernhardt's formulation, I'm aiming to maximize the area under the curve.

No new resolutions, yes new resolvings: I'm not looking to add new goals for myself for 2017. I'm not even trying to make existing things good enough; there are too many things, and as a recovering perfectionist I have trouble setting a reasonable bar. I'm just trying to make them good enough that I can expect small slices of time and attention to permit small improvements. Jessica Kerr has a thoughtful side blog named True in software, true in life. Here's something that'd qualify: When conditions are expected to change, smaller batch size helps us adjust. Reducing batch size takes time and effort. Paying down my self-debts (technical and otherwise) feels like resolving. I have, at times, felt quite out of position at managing myself. Lately I'm feeling much more in position, and much more like I can expect to continue to make small improvements to my positioning. When you want the option to change your body's direction, you take smaller steps, lower your center, concentrate on balance. That's Agile. Moi? My current best understanding is that a balanced life is a small-batch-size life. If that's the case, I'm getting there.

Further repositioning: This coming Monday, I'll be switching to one of these weird new MacBook Pros with the row of non-clicky touchscreen keys. If my current computer survives till then, that'll be one smooth step in a series of transitions. (In other news, Bekki defends her dissertation that day.) The following Monday, I'll be starting my next project, a mostly-remote gig pairing in Python to deliver software for a client while encouraging and supporting growth in my Pillar teammates. I'll be in Des Moines every so often; if you're there and/or have recommendations for me, I'd love to hear from you. The Monday after that, we'll pack up a few things the movers haven't already taken away, and our time in Indiana will come to an end. We're headed back to the New York area to live near family and friends.

No resolutions, yes intentions: For 2017, I declare my intentions to continue to improve my health and otherwise attend to my own needs, help more people understand what software development work is like, and help more people feel heard. I hope to see and hear you along the way.

January 04, 2017 So over the holidays, I managed to get in some good quality family time and find some time to work on some Open Source stuff.
I meant to work mainly on dhcpcd, but it turned out I spent most of my time working on the NetBSD curses library so that Python curses now works with it. Now, most people r...

Adding and removing hardware components in operation is common in today's commoditized computing environments. This was not always the case - in the past century, one had to power down a machine in order to change network cards, hard disks or RAM. A major step towards changing a system's configuration at runtime for customers came with USB, but that's not where it ends - other systems like PCI support hotplugging as well. Another area where the system's configuration can change is the amount of Random Access Memory (RAM). Usually this is fixed, determined at system start time, and then managed by the operating system's memory management subsystem. But especially with today's virtualized hardware, even the amount of RAM assigned to a system can easily be changed. For example, a VM can be assigned more RAM when needed, without even rebooting the system, leading to increased system performance without introducing swapping/paging overhead. Of course this requires support from the operating system and its memory management subsystem. For NetBSD, the UVM virtual memory system has now been changed to support this via the uvm_hotplug(9) API, and a first user for this is the Xen balloon(4) driver.

Quoting from the balloon(4) manpage: The balloon driver supports the memory ballooning operations offered in Xen environments. It allows shrinking or extending a domain's available memory by passing pages between different domains. The uvm_hotplug(9) manpage gives us more information on the UVM hotplug functionality: When the kernel is compiled with options UVM_HOTPLUG, memory segments are handled in a dynamic data structure (rbtree(3)) compared to a static array when not. This enables kernel code to add or remove information about memory segments at any point after boot - thus hotplug. To answer more questions for portmasters who want to change their ports, Cherry G. Mathew has now posted a uvm_hotplug(9) port masters FAQ. It covers questions on the background, affected files, and needed changes. For more information on UVM, see Charles "Chuck" Cranor's PhD dissertation on the Design and Implementation of UVM (PDF) as well as his Usenix talk on the UVM Virtual Memory System (PS). There is also plenty of information available on Xen ballooning - check it out and share your experiences on NetBSD's port-xen mailing list.

December 29, 2016 My brother got me some very tasty presents for Christmas (and my up-coming birthday), namely the GIGABYTE BRIX J1900 and a Samsung EVO 750 250G. Santa also brought me 8G of Crucial memory. Putting them all together makes a nice new machine on which to install NetBSD/Xen. The key part is this is a low...

December 22, 2016 After my last blog postings on the NetBSD scheduler, some time went by. What has happened is that the code to handle process migration was rewritten to give more knobs for tuning, and some testing was done. The initial problem stated in PR kern/51615 is solved by the code. To reach a wider audience and get more testing, the code was committed to NetBSD-current today. Now, two things remain to be seen: More testing. This best involves situations that compare the system's behaviour without and with the patch. Situations to test include pure computation jobs that involve multiple parallel processes, a mix of CPU-crunching and input/output (again on a number of concurrent processes), and full build.sh runs.
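As a hedged illustration of the build.sh case (source path, job count, and target machine are only examples), one data point might be gathered like this, once on the old kernel and once on the patched one:

    # time a full NetBSD release build with several parallel jobs
    cd /usr/src
    time ./build.sh -j 8 -m amd64 release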
If you have time and an interesting set of numbers, please feel free to let us know on tech-kern.

Documentation. There is already a number of undocumented sysctls under kern.sched, which has now been extended by one more, average_weight. While adding the knob to the formula is the obvious part, testing it under various real-life conditions and seeing how things change is left to be determined by a PhD thesis or two - be sure to drop us your patches for src/share/man/man7/sysctl.7 if you can come up with a comprehensible description of all the scheduler sysctls. So just now, when you thought there is no more research to be done in scheduling algorithms, here is your chance for fame and glory. :-)

December 17, 2016 How can I activate the Latin American keyboard on NetBSD? Because when I was installing I never saw the Latin American keyboard, only Spanish.

December 09, 2016 Where can I find and install an AR9271 driver for the latest NetBSD? The target machine does not have Internet access and I need to set up the WiFi dongle first. UPDATE: wpa_supplicant was already written, but I didn't see my device. When I plug in the dongle it's shown as: ifconfig shows only the re0 and lo0 interfaces. UPDATE: I saw on some Linux forums that the dongle uses an Atheros chip, but I checked in Windows and see Ralink. The ral driver is also integrated in NetBSD, but the situation doesn't change - I see no ra device in dmesg.boot.

December 08, 2016 So, I've installed NetBSD 7 and the device shows up again as ugen (ugein, lol). Then I installed FreeBSD 10.2 and ugen again. usbconfig gives me ugen4.3: <product 0x7601 vendor 0x148f> at usbus4, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (90ma) So, what's next? Buying a new dongle is the last thing I'll do. UPD: the NDIS driver does not work.

December 07, 2016 At Agile Testing Days, I facilitated a workshop called DevOps Dojo. We role-played Dev and Ops developing and operating a production system, then figured out how to do it better together. You're welcome to use the workshop materials for any purpose, including your own workshop. If you do, I'd love to hear about it.

Some firsts: I've spoken at several instances of pkgsrcCon (including twice in nearby Berlin), but that's more like a hackathon with some talks. Agile Testing Days was a proper conference, with hundreds of people and plenty of conferring. If someone asks whether I'm an international speaker, or claims I am one, I now won't feel terribly uncomfortable going along with it. What I expected from many previous Lean Coffees: I'd have to control myself to not say all the ideas and suggestions that come to mind. What happened at this Lean Coffee: It was very easy to listen, because I didn't have many ideas or suggestions, because the topics came from people who were mostly testers. Conclusions I immediately drew: Come to think of it, I have not played every role on a team. I don't know what it's like to be a tester. Maybe my guesses about what it's like are less wrong than some others, but they're still gonna be wrong. This is evidently my first conference that's more testing than Agile. Cool! I bet I can learn a lot here. Thanks to Troy Magennis, Markus Gärtner, and Cat Swetel, I decided to try a new idea and spend a few slides drawing attention to the existence and purpose of the Agile Testing Days Code of Conduct. I can't tell yet how much good this did, but it took so little time that I'll keep trying it in future conference presentations and workshops.
Some nexts: My next gig will be remote coaching, centered around what we notice as we're pair programming and delivering working software. I've done plenty of coaching and plenty of remote work, but not usually at the same time. Thanks to Lean Coffee with folks like Janet and Alex Schladebeck, I got some good advice on being a more effective influencer when it takes more intention and effort to have face-to-face interactions. Alex: For a personal connection, start meetings by unloading your baggage (whatever's on your mind today that might be dividing your attention) and inviting others to unload theirs. (Ideally, establish this practice in person first.) Janet: Ask questions that help people recognize their own situation. (Helping people orient themselves in their problem spaces is one of my go-to strengths. I'm ready to be leaning harder on it.) As I learn about remote coaching, I expect to write things down at Shape My Work, a wiki about distributed Agile that Alex Harms and I created. You'll notice it has a Code of Conduct. If it makes good sense to you, we'd love to learn what you've learned as a remote Agilist.

I found Agile Testing Days to be a lovingly organized and carefully tuned mix of coffee breaks, efficiency, flexibility, and whimsy. The love and whimsy shone through. I'm honored to have been part of it, and I sure as heck hope to be back next year. We'd be back next year anyway; we visit family in Germany every December. Someday we might choose to live near them for a while. It occurs to me that having participated in Agile Testing Days might well have been an early investment in that option, and the thought pleases me. (As does the thought of hopping on a train to participate again.) I'm in Europe through Christmas. I consult, coach, and train. Do you know of anyone who could use a day or three of my services?

One aspect of being a tester I do identify with is being frequently challenged to explain their discipline or justify their decisions to people who don't know what the work is like (and might not recognize the impact of their not knowing). In that regard, I wonder how helpful Agile in 3 Minutes is for testers. Let's say I could be so lucky as to have a few guest episodes about testing. Who would be the first few people you'd want to hear from? Who has a way with words and ideas, knows the work, and can speak to it in their unique voice to help the rest of us understand a bit better?

December 01, 2016 November 24, 2016 Interesting news comes in via Slashdot: Apple Releases macOS 10.12 Sierra Open Source Darwin Code. Apple has released the open source Darwin code for macOS 10.12 Sierra. The code, located on Apple's open source website, can be accessed via direct link now, although it doesn't yet appear on the site's home page. The release builds on a long-standing library of open source code that dates all the way back to OS X 10.0. There, you'll also find the Open Source Reference Library, developer tools, along with iOS and OS X Server resources. The lowest layers of macOS, including the kernel, BSD portions, and drivers, are based mainly on open source technologies, collectively called Darwin. As such, Apple provides download links to the latest versions of these technologies for the open source community to learn from and to use. This may not only be of interest to the OpenDarwin folks (or rather their successors in PureDarwin), but more investigation, not only of the code itself but also of the license it is released under, is necessary to learn if anything can be gained back for NetBSD.
Why "back"? As you may or may not remember, macOS includes some parts of NetBSD (besides lots of FreeBSD, probably some OpenBSD, much other Open Source software, and surely a big lot of Apple's own code).

My first job was in Operations. When I got to be a Developer, I promised myself I'd remember how to be good to Ops. I've sometimes succeeded. And when I've been effective, it's been in part due to my firsthand knowledge of both roles.

DevOps is two things (hint: they're not Dev and Ops): Part of what people mean when they say DevOps is automation. Once a system or service is in operation, it becomes more important to engineer its tendencies toward staying in operation. Applying disciplines from software development can help. These words are brought to you by a Unix server I operate. I rely on it to serve this website, those of a few friends, and a tiny podcast of some repute. Oh yeah, and my email. It has become rather important to me that these services tend to stay operational. One way I improve my chances is to simplify what's already there.

If it hurts, do it more often: Another way is to update my installed third-party software once a week. This introduces two pleasant tendencies: it's much less likely, at any given time, that I'm running something dangerously outdated, and more likely, when an urgent fix is needed, that I'll have my wits about me to do it right. Updating software every week also makes two strong assumptions about safety (see Modern Agile's "Make Safety a Prerequisite"): that I can quickly and easily roll back to the previous versions, and build and install new versions. Since I've been leaning hard on these assumptions, I've invested in making them more true. The initial investment was to figure out how to configure pkgsrc to build a complete set of binary packages that could be installed at the same time as another complete set. My hypothesis was that then, with predictable and few side effects, I could select the active software set by moving a symbolic link. It worked. On my PowerPC Mac mini, the best-case upgrade scenario went from half an hour's downtime (bring down services, uninstall old packages, install new packages, bring up services) to less than a minute (install new packages, bring down services, move symlink, bring up services, delete old packages after a while). The worst case went from over an hour to maybe a couple of minutes.

Until it hurts enough less: I liked the payoff on that investment a lot. I've been adding incremental enhancements ever since. I used to do builds directly on the server: in place for low-risk leaf packages, as a separate full batch otherwise. It was straightforward to do, and I was happy to accept an occasional reduction in responsiveness in exchange for the results. After the Mac mini died, I moved to a hosted Virtual Private Server that was much easier to mimic. So I took the job offline to a local VirtualBox running the same release and architecture of NetBSD (32-bit i386 to begin with, 64-bit amd64 now, both under Xen). The local job ran faster by some hours (I forget how many), during which the server continued devoting all its I/O and CPU bandwidth to its full-time responsibilities. Last time I went and improved something was to fully automate the building and uploading, leaving myself a documented sequence of manual installation steps. Yesterday I extended that shell script to generate another shell script that's uploaded along with the packages. When the upload's done, there's one manual step: run the install script.
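A minimal sketch of the idea (not the actual generated script; the package-set paths and the service name here are hypothetical):

    #!/bin/sh -e
    # Activate the newly uploaded package set by repointing a symlink,
    # keeping the old set around for a quick rollback.
    NEW=/usr/pkg-20170105
    OLD=$(readlink /usr/pkg)
    /etc/rc.d/mymailservice stop        # bring down services (illustrative name)
    ln -shf "$NEW" /usr/pkg             # point /usr/pkg at the new set
    /etc/rc.d/mymailservice start       # bring services back up
    echo "previous set was $OLD; delete it after a while"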
If you can read these words, it works.

DevOps is still two things: Applying Dev concepts to the Ops domain is one aspect. When I'm acting alone as both Dev and Ops, as in the above example, I've demonstrated only that one aspect. The other, bigger half is collaboration across disciplines and roles. I find it takes some not-tremendously-useful effort to distinguish this aspect of DevOps from BDD or from anything else that looks like healthy cross-functional teamwork. It's the healthy cross-functional teamwork I'm after. There are lots of places to start having more of that. If your team's context suggests to you that DevOps would be a fine place to start, go after it! Find ways for Dev and Ops to be learning together and delivering together. That's the whole deal.

Here's another deal: Two weeks from today, at Agile Testing Days in Potsdam, Germany, I'm running a hands-on DevOps collaboration workshop. Can you join us? It's not too late, and you can save 10% off the price of the conference ticket. Just provide my discount code when you register. I'd love to see you there.

November 22, 2016 According to NetBSD's wiki I can use pkg_add -uu to upgrade packages. However, when I attempt to use pkg_add -uu it results in an error. I've tried to parse the pkg_add man page but I can't tell what the command is to update everything. I can't use pkg_chk because it's not installed, and I can't get the package system to install it: What is the secret command to get the OS to update everything? Please forgive my ignorance with this question. I only have NetBSD systems for testing software. It gets used a few times a year, and I don't know much about it otherwise.

October 27, 2016 A LAN has been set up with IP/subnet mask 192.48.1.0/255.255.255.224. What is the maximum number of machines that can be set up in this LAN, and why? (This comes under a class C network, so the maximum would be 255 or less - correct me if I'm wrong.) Suresh ([email protected]) sends a mail to my friend Rahul ([email protected]) with these three files as separate attachments as below: march-reports.ppt - PowerPoint file of size 256 KB; locations.rar - RAR archive file of size 460 KB; me-snap.tiff - TIFF picture file of size 2970 KB. a) What is the size of the outgoing mail including mail headers? b) What is the outgoing mail size if all three files are archived as one single .rar file and sent out as one single attachment? c) Show the MIME-based mail structure of the outgoing mail. Show the NetBSD-based C code for sending a text message "Hello. This works" to a remote server running on IP 122.250.110.14 on port 5050 and getting back an acknowledgement.

October 10, 2016 The FreeBSD Release Engineering Team is pleased to announce the availability of FreeBSD 11.0-RELEASE. This is the first release of the stable/11 branch. Some of the highlights: OpenSSH DSA key generation has been disabled by default. It is important to update OpenSSH keys prior to upgrading. Additionally, Protocol 1 support has been removed. OpenSSH has been updated to 7.2p2. Wireless support for 802.11n has been added. By default, the ifconfig(8) utility will set the default regulatory domain to FCC on wireless interfaces. As a result, newly created wireless interfaces with default settings will have less chance of violating country-specific regulations. The svnlite(1) utility has been updated to version 1.9.4. The libblacklist(3) library and applications have been ported from the NetBSD Project. Support for the AArch64 (arm64) architecture has been added.
Native graphics support has been added to the bhyve(8) hypervisor. Broader wireless network driver support has been added. The release notes provide an in-depth look at the new release, and you can get it from the download page.

September 14, 2016 Many programming guides recommend beginning scripts with the /usr/bin/env shebang in order to automatically locate the necessary interpreter. For example, for a Python script you would use #!/usr/bin/env python, and then, the saying goes, the script would just work on any machine with Python installed. The reason for this recommendation is that /usr/bin/env python will search the PATH for a program called python and execute the first one found, and that usually works fine on one's own machine. Unfortunately, this advice is plagued with problems and assuming it will work is wishful thinking. Let me elaborate. I'll use Python below for illustration purposes, but the following applies equally to any other interpreted language.

i) The first problem is that using /usr/bin/env lets you find an interpreter but not necessarily the correct interpreter. In our example above, we told the system to look for an interpreter called python, but we did not say anything about the compatible versions. Did you want Python 2.x or 3.x? Or maybe exactly 2.7? Or at least 3.2? You can't tell, right? So the computer can't tell either; regardless, the script will probably run with whichever version happens to be called python, which could be any thanks to the alternatives system. The danger is that, if the version is mismatched, the script will fail, and the failure can manifest itself at a much later stage (e.g. a syntax error in an infrequent code path) under obscure circumstances.

ii) The second problem, assuming you ignore the version problem above because your script is compatible with all possible versions (hah), is that you may pick up an interpreter that does not have all prerequisite dependencies installed. Say your script decides to import a bunch of third-party modules: where are those modules located? Typically, the modules exist in a centralized repository that is specific to the interpreter installation (e.g. a lib/python2.7/site-packages directory that lives alongside the interpreter binary). So maybe your program found a Python 2.7 under /usr/local/bin but in reality you needed it to find the one in /usr/bin, because that's where all your Python modules are. If that happens, you'll receive an obscure error that doesn't properly describe the exact cause of the problem you got.

iii) The third problem, assuming your script is portable to all versions (hah again) and that you don't need any modules (really?), is that you are assuming that the interpreter is available via a specific name. Unfortunately, the name of the interpreter can vary. For example: pkgsrc installs all Python binaries with explicitly-versioned names (e.g. python2.7 and python3.0) to avoid ambiguity, and no python symlink is created by default, which means your script won't run at all even when Python is seemingly installed.

iv) The fourth problem is that you cannot pass flags to the interpreter. The shebang line is intended to contain the name of the interpreter plus a single argument to it. Using /usr/bin/env as the interpreter name consumes the first slot and the name of the interpreter consumes the second, so there is no room to pass additional flags to the program.
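A small, hedged illustration of the flags problem (the file name is made up, and the exact error text differs by platform):

    # create a script whose shebang tries to pass a flag through env, then run it
    printf '#!/usr/bin/env python -u\nprint("hi")\n' > flags.py
    chmod +x flags.py
    ./flags.py   # may fail with something like: env: 'python -u': No such file or directory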
What happens with the rest of the arguments is platform-dependent: they may all be passed as a single string to env, or they may be tokenized as individual arguments. This is not a huge deal though: one argument for flags is too restricted anyway, and you can usually set up the interpreter later from within the script.

v) The fifth and worst problem is that your script is at the mercy of the user's environment configuration. If the user has a "misconfigured" PATH, your script will mysteriously fail at run time in ways that you cannot expect and in ways that may be very difficult to troubleshoot later on. I put "misconfigured" in quotes because the problem here is very subtle. For example: I do have a shell configuration that I carry across many different machines and various operating systems; that configuration has complex logic to determine a sane PATH regardless of the system I'm on, but this, in turn, means that the PATH can end up containing more than one version of the same program. This is fine for interactive shell use, but it's not OK for any program to assume that my PATH will match its expectations.

vi) The sixth and last problem is that a script prefixed with /usr/bin/env is not suitable for being installed. This is justified by all the other points illustrated above: once a program is installed on the system, it must behave deterministically no matter how it is invoked. More importantly, when you install a program, you do so under a set of assumptions gathered by a configure-like script or prespecified by a package manager. To ensure things work, the installed script must see the exact same environment that was specified at installation time. In particular, the script must point at the correct interpreter version and at the interpreter that has access to all package dependencies.

So what to do? All this considered, you may still use /usr/bin/env for the convenience of your own throwaway scripts (those that don't leave your machine), and also for documentation purposes and as a placeholder for a better default. For anything else, here are some possible alternatives to using this harmful shebang: Patch up the scripts during the build of your software to point to the specific chosen interpreter, based on a setting the user provided at configure time or one that you detected automatically. Yes, this means you need make or similar for a simple script, but these are the realities of the environment they'll run under. Or rely on the packaging system to do the patching, which is pretty much what pkgsrc does automatically (and, I suppose, pretty much any other packaging system out there). Just don't assume that the magic #!/usr/bin/env foo is sufficient, or even correct, for the final installed program.

fx chatter: There is a myth that the original shebang prefix was chosen so that the kernel could look for it as a 32-bit magic cookie at the beginning of an executable file. I actually believed this myth for a long time until today, as a couple of readers pointed me at "The magic, details about the shebang/hash-bang mechanism on various Unix flavours", with interesting background that contradicts this.

August 24, 2016 I'm running NetBSD in a virtual machine. Documentation and explanations on how to use pkgsrc are scarce. Let's say I want to install vim for NetBSD. What would I type? Do I need a URL? Do I need a specific version? Do I need to set up a directory for building the source of vim?
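A hedged sketch of the two usual approaches (assuming a binary package repository is configured and that the pkgsrc tree, if used, lives in /usr/pkgsrc):

    # from prebuilt binary packages, using pkgin
    pkgin install vim

    # or from source, via the pkgsrc tree
    cd /usr/pkgsrc/editors/vim
    make install clean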
July 08, 2016 Here are some notes on installing and running NetBSD/evbarm on the AllWinner A20-powered CubieBoard2. I bought this board a few weeks ago for its SATA capabilities, despite the fact that there are now cheaper boards with more powerful CPUs. The required steps for creating a bootable micro SD card are detailed on the NetBSD Wiki, and a NetBSD installation is required to run mkubootimage. I used a USB to TTL serial cable to connect to the board and create user accounts. Do not be afraid of serial, as it has in fact only advantages: there is no need to connect a USB keyboard nor an HDMI display, and it also brings back nice memories. Connecting using cu (from my OpenBSD machine): Device name might be different when using cu on other operating systems. Adding a regular user in the wheel group: Adding a password to the newly created user and changing the default shell to ksh: Installing and configuring pkgin: Finally, here is a dmesg for reference purposes:

June 30, 2016 I've been itching to go wireless on my office desk for some time. The final wires to eradicate are from my Mac into a USB hub connected to two hard discs for backups. Years ago I had an Apple Time Capsule. The Time Capsule is an Airport Wi-Fi base station with a hard disc for Macs to back up to using the Time Machine backup software. It was pretty solid kit for a couple of years. Under the hood, it runs NetBSD and, as an aside, I have had a few beers with the guy who ported the operating system. The power supply decided to give up - a very common fault, apparently. I will clean the cables up. I promise. When I was on my travels and living in two places, I had hard discs in both locations. The Mac supports multiple discs for backups and I encrypted the backups in case the discs were stolen. But now that I'm in one home, I want to be able to move around the house with the Mac but still back up without having to go to the office. We are a two-Mac house, so we need something more convenient. I already have a base station and I don't really want to shell out loads of money for an Apple one. There are several options to set up a Time Capsule equivalent. If you have a spare Mac, get a copy of Mac OS X Server. It will support Time Machine backups for multiple Macs and also supports quotas so that the size of the backups can be controlled. I don't have a spare stationary Mac. Anything that speaks the AppleTalk file sharing protocol reasonably well will do. Enter the Raspberry Pi. I have a Raspberry Pi 3, and within minutes one can install the Netatalk software. This has been available for years on Linux and implements the Apple file sharing protocols really well. With an external drive added, I was able to get a Time Machine backup working using this article. I could not use my existing backup drive as is. Linux will read and write Mac OS drives, but there is a bit of to-ing and fro-ing, so it is best to start with a fresh native Linux filesystem. Even if you can get it to work with the Mac OS drive, it will not be able to use a Time Machine backup from a drive previously directly connected. I've been using this setup for the last couple of weeks. I have not had to do a serious restore yet, and I should caveat that I still have a hard drive I plug directly into the machine, just in case. The first rule of backups: a file doesn't exist unless there are three copies on different physical media. (The Raspberry Pi is set up to be a MiniDLNA server. It will stream media to Xboxes and other media players.)

June 12, 2016 I installed sudo on NetBSD 7.0 using pkg. I copied /usr/pkg/etc/sudoers to /etc/sudoers because the docs say /etc/sudoers and possibly /etc/sudoers.local is used.
I uncommented the line %wheel ALL=(ALL) ALL. I then added myself to the wheel group. I verified I am in wheel with groups. I then logged off and then back on. When I attempt to run sudo <command>, I get the standard: What is wrong with my sudo installation, and how can I fix it?

May 31, 2016 A brief description of playing around with SunOS 4.1.4, which was the last version of SunOS to be based on BSD. File Info: 17 Min, 8 MB. Ogg Link: archive.org/download/bsdtalk265/bsdtalk265.ogg

April 30, 2016 Playing around with the gopher protocol. Description of gopher from the 1995 book Student's Guide to the Internet by David Clark. Also, at the end of the episode is audio from an interview with Mark McCahill and Farhad Anklesaria that can be found at youtube.com/watch?v=oR76UI7aTvs Check out gopher.floodgap.com/gopher File Info: 27 Min, 13 MB. Ogg Link: archive.org/download/bsdtalk264/bsdtalk264.ogg

March 23, 2016 This episode is brought to you by ftp, the Internet file transfer program, which first appeared in 4.2BSD. An interview with the hosts of the Garbage Podcast, joshua stein and Brandon Mercer. You can find their podcast at garbage.fm File Info: 17 Min, 8 MB. Ogg Link: archive.org/download/bsdtalk263/bsdtalk263.ogg

via these fine people and places: This planet is operated by Kimmo Suominen. Hosting provided by Global Wire Oy.

