[ot][crazy][personal]
goal: rational decision making and action
concern: cognitive spasms stimulating desynchronization of relevant thoughts
note: i have some eeg data
note: i’ve found a coping strategy that can have repeated success for some things, involving kind of ritualizing and repeating behaviors
note: my experiences depend on what i experience; they’re networks of triggers, not just random
journal: i’m thinking, when i pursue pretty much any task, it stimulates these desynchronization spasms. i’ve thoroughly learned to influence myself to struggle. given the mcboss story, i could probably somehow record data on the spasms, and have already to some degree.

data: notes, video, eeg. what’s missing is organizing that in one place. stimulating it now, writing that. regarding organizing, a clear concept of what to do with the data could help with making a little organization some day.

i’m thinking it would be nice to have timestamps on logs that indicate difficult events. such timestamps could possibly be autogenerated. i abandon tasks, so record when i stop them. i also have an accelerometer on my eeg headband, so there are likely events of jerking that could be classified pretty easily.

thoughts:
- autogeneration of timestamps from task abandonment
- autogeneration of timestamps from jerk events

then, what to do with these timestamps?
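the jerk-event idea above can be sketched very simply: threshold the change in acceleration magnitude, with a refractory period so one jerk produces one timestamp. the sample layout, threshold, and refractory values here are assumptions to tune against real accelerometer data, not anything from the actual headband.

```python
# Sketch: auto-generate event timestamps from accelerometer "jerk" spikes.
# Assumed data layout: (timestamp, ax, ay, az) tuples at a roughly fixed rate.
import math

def jerk_event_timestamps(samples, threshold=2.5, refractory=1.0):
    """Return timestamps where acceleration magnitude jumps past threshold.

    samples    -- iterable of (t, ax, ay, az)
    threshold  -- magnitude change treated as a jerk event (assumed units of g)
    refractory -- seconds to ignore after an event, so one jerk = one stamp
    """
    events = []
    prev_mag = None
    last_event = -math.inf
    for t, ax, ay, az in samples:
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if prev_mag is not None:
            if abs(mag - prev_mag) > threshold and t - last_event > refractory:
                events.append(t)
                last_event = t
        prev_mag = mag
    return events
```

a real classifier would likely want more than a magnitude delta (orientation, duration), but a threshold pass like this is enough to seed a timestamp log.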
journal: maybe train a model to predict them. then it can warn me when i’m getting near, in a biofeedback way. to do the warning it would need streamed data, not archived data.
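the biofeedback loop above amounts to: keep a short rolling window over the live stream, score it, and warn when the score rises. a minimal sketch, where `risk_score` is only a placeholder (window variance) standing in for whatever trained model would eventually do the predicting:

```python
# Sketch of the biofeedback warning loop: rolling window over streamed
# samples, warn when a (placeholder) risk score crosses a threshold.
from collections import deque

def risk_score(window):
    # Placeholder feature: variance of the window. A trained model's
    # probability output would replace this.
    n = len(window)
    mean = sum(window) / n
    return sum((x - mean) ** 2 for x in window) / n

def warn_stream(samples, window_len=8, threshold=0.5):
    """Yield (t, score, warned) for each sample once the window fills."""
    window = deque(maxlen=window_len)
    for t, value in samples:
        window.append(value)
        if len(window) == window_len:
            score = risk_score(window)
            yield t, score, score > threshold
```

the point of the generator shape is that it consumes a live iterator the same way it consumes a list, which is exactly the streamed-not-archived distinction.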
conclusion: regarding ideas near neurofeedback or eeg, a useful next step is:
- providing a way to access eeg data in a streaming manner

this is also useful:
- converting eeg data to any normative format
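on the normative-format point: once a recording is loaded, flattening a stream to something like csv is a few lines. the dict shape below mimics what pyxdf's loader hands back per stream (a `time_stamps` array alongside a `time_series` array), but it is built by hand here so the example runs without an .xdf file; the channel names are made up.

```python
# Sketch: flatten one loaded XDF-style stream into CSV as a "normative
# format" step. The stream dict shape is modeled on pyxdf output.
import csv
import io

def stream_to_csv(stream, channel_names):
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["time"] + channel_names)
    for t, row in zip(stream["time_stamps"], stream["time_series"]):
        writer.writerow([t] + list(row))
    return buf.getvalue()
```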
goal parts include:
- timestamps on data
- an algorithm that predicts the timestamps

so, accessing archived data is more important to start, for producing the algorithm. given my response to such activity, the logging would ideally continue during the algorithm production. given my loss of data, this is why streaming can be valuable. i am still losing my records.

- get data more reliably
- timestamp any associated event of value, such as high acceleration
- train a model to glean statistics prior to the event

we’ll probably then try to repeat it to bend things more toward a goal of establishing some reliable behavior. finding friends or professionals who can remember and report back on my behavior is also quite helpful.
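the "glean statistics prior to the event" step can be prototyped with archived data alone: cut out windows ending just before each event timestamp, cut baseline windows elsewhere, and fit the simplest possible separator. everything here is illustrative (the mean-amplitude feature, the midpoint rule); a real attempt would use richer features on the eeg channels.

```python
# Sketch: extract pre-event windows from archived (times, signal) data and
# fit a trivial threshold predictor on a mean-amplitude feature.

def window_before(signal, times, t_event, length):
    """Samples in [t_event - length, t_event)."""
    return [x for t, x in zip(times, signal) if t_event - length <= t < t_event]

def fit_threshold(pre_event_windows, baseline_windows):
    feat = lambda w: sum(abs(x) for x in w) / len(w)  # mean abs amplitude
    pre = sum(map(feat, pre_event_windows)) / len(pre_event_windows)
    base = sum(map(feat, baseline_windows)) / len(baseline_windows)
    midpoint = (pre + base) / 2
    # Predict "event coming" when the feature lands on the pre-event side.
    sign = 1 if pre > base else -1
    return lambda w: sign * (feat(w) - midpoint) > 0
```

once something like this separates pre-event windows from baseline better than chance, the same function drops into the streaming warning loop.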
blockchain ur brainwaves, karl. it is incredibly hard to find good help with ur problem, and not very hard to train an algorithm to predict when it occurs. but you destroy your data !! blockchain ur brainwaves. blockchain ur brainwaves!
i did a very barebones version of this, temporary. here is a ditem of an lsl xdf file:

{ditem: [7X8rCKxBGSu9vCJM6Jp1ZMNzZUjhklAwUpv1LgDywXM], min_block: [1075417, EIAKwDgzNHhNQvOpcTlgWOkYQh-fh8QcG0EsoGXNNuJRcMfvvi2V75AQTHrs54Sc], api_timestamp: 1670776690402}

i have not tried it with download.py yet.

it was hard to figure out that i needed to poke holes in firewalld for lo<->lo multicast packets to be received by more than one socket. until i did this it would only see one channel group.

i would like to set up a higher quality device, but it's probably more important to stabilize this and make it useful. i'm happy i'm using lsl, which is one of the very normative and cross-platform tools. i also briefly tested the xdf format with streams that get terminated early, and it loaded my example data fine using the official tools; thorough test pending.
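for the firewalld hole-poking, one common recipe is a rich rule admitting the ipv4 multicast range, since lsl discovery rides on multicast. this is a config fragment under assumptions, not the exact rules used here: the right zone depends on where lo sits (check `firewall-cmd --get-active-zones`), and the unrestricted 224.0.0.0/4 accept is broader than strictly necessary.

```shell
# allow ipv4 multicast so lo<->lo LSL discovery packets reach every socket
sudo firewall-cmd --permanent \
  --add-rich-rule='rule family="ipv4" destination address="224.0.0.0/4" accept'
sudo firewall-cmd --reload
```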
Undescribed Horrific Abuse, One Victim & Survivor of Many