Let's discuss statistical parsing /g/
Ultimately, my goal is to be able to distinguish the different parts of any given signal / data structure ... whatever.
I don't want to know what it means, I just want to know that there are distinct symbols or patterns. But I also want to be able to capture "containers",
or things that wrap around discrete chunks in the signal.
Let me give you 2 instances:
>Natural language:
John told Alex that Christine said, "Hey, this is a shitty example sentence"
>Some program
function(param, arg2){return param - arg2;}
In the natural language example, the wrapping symbols would be the pauses (spoken) / quotes (written) that wrap and cluster the quoted words into one item.
Just like the function has 2 sets of symbols to "wrap" and contain other information.
Again, I don't care about "meaning", just distinction.
I believe this is possible. I believe there are common behaviors of "wrappers" and key symbols. Key symbols, I think, can be found with some kind of statistical approach: count the occurrences of certain combinations, then use a score to decide whether a candidate is worth the risk of being treated as a keyword or whatever.
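A minimal sketch of that counting idea in Python, assuming "key symbols" just means characters whose relative frequency clears a threshold (here, the mean score over all repeated symbols — the threshold and the single-character granularity are my assumptions, not anything proven):

```python
from collections import Counter

def score_key_symbols(signal, min_count=2):
    """Score each symbol by relative frequency and keep the above-average ones.

    Hypothetical sketch: a 'key symbol' is assumed to be any character
    that repeats (>= min_count) and whose frequency beats the mean.
    """
    counts = Counter(signal)
    total = sum(counts.values())
    scores = {sym: n / total for sym, n in counts.items() if n >= min_count}
    if not scores:
        return {}
    mean = sum(scores.values()) / len(scores)
    return {sym: s for sym, s in scores.items() if s >= mean}

print(score_key_symbols("function(param, arg2){return param - arg2;}"))
```

On the function example this surfaces the most repeated symbols; a real version would probably score n-grams, not single characters, and use something smarter than the mean (e.g. pointwise mutual information) as the risk score.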
I don't know how you would do this for wrappers, unless they can literally be deduced as everything that isn't a "key symbol".
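One cheap heuristic: wrappers tend to be rare symbols whose open/close counts balance across the signal. The sketch below is not fully unsupervised — the candidate pairs are hand-listed, which is an assumption; a real system would have to learn which symbols pair up:

```python
from collections import Counter

# Hypothetical candidate pairs; learning these from raw data is the
# actual hard part and is assumed away here.
CANDIDATE_PAIRS = [('(', ')'), ('{', '}'), ('[', ']'), ('"', '"')]

def find_wrappers(signal):
    """Keep candidate pairs whose counts balance in the signal."""
    counts = Counter(signal)
    wrappers = []
    for open_s, close_s in CANDIDATE_PAIRS:
        if open_s == close_s:
            # Self-closing wrappers (quotes) must occur an even number of times.
            if counts[open_s] >= 2 and counts[open_s] % 2 == 0:
                wrappers.append((open_s, close_s))
        elif counts[open_s] and counts[open_s] == counts[close_s]:
            wrappers.append((open_s, close_s))
    return wrappers

print(find_wrappers('function(param, arg2){return param - arg2;}'))
print(find_wrappers('John told Alex that Christine said, "Hey"'))
```

It finds the parens and braces in the function example and the quotes in the sentence example, which at least matches the intuition that wrappers are the balanced non-key symbols.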
I'm not a very disciplined programmer so a lot of my theories are fuzzy.
I was thinking that once the key relationships are found (and it's likely that there aren't any more to be found), you can cluster all the items in the 1-dimensional signal.
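Once the wrappers are known, the clustering step is basically classic delimiter matching: a stack folds the flat signal into nested chunks. A sketch, assuming the wrapper pairs were already discovered (the `PAIRS` table below is hard-coded, which is the assumption):

```python
# Hypothetical: assume wrapper discovery already produced these pairs.
PAIRS = {'(': ')', '{': '}', '"': '"'}

def cluster(signal):
    """Fold a flat 1-D signal into nested clusters using known wrappers."""
    root = []      # top-level cluster
    stack = []     # currently-open wrappers: (open_symbol, contents)
    current = root
    for ch in signal:
        if stack and ch == PAIRS[stack[-1][0]]:
            # Closing symbol of the innermost open wrapper: pop out.
            stack.pop()
            current = stack[-1][1] if stack else root
        elif ch in PAIRS:
            # Opening symbol: start a new nested cluster.
            child = []
            current.append((ch, child))
            stack.append((ch, child))
            current = child
        else:
            current.append(ch)
    return root

print(cluster('a(b)c'))
```

The output is a tree: wrapped spans become `(open_symbol, contents)` nodes, everything else stays flat, so the quoted clause or the function body comes out as one item without the system knowing what it means.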
>>62096898
>Sometimes you have to stop thinking so much and just go to the designated shitting street
Also Python is the greatest language of all.