
File: lol.mp4
(4.72 MB, 720x1280)
4951321
/j/ 💗 /b/
1 post omitted.
>>
*sits on ur face*
(   ´¬`      )
>>
Looks more like /j/ olev /sw/ (=゚ω゚)ノ
>>
I hope her paycheck was as fat as he
>>
*fards*
>>
LOL

AAAALLLL ABOOAAARD THE partybus
>>
No. I don't wanna. :nyaoo2:
>>
Party Bus: because we won't all fit in the Van.
>>
File: 1774561556636.gif
(357 KB, 400x300)
366118
LET'S GOOOOOO (@^▽^)/

2456148
Imagine instead of scientists using their resources and knowledge to research gay things such as curing cancer and whatnot, they used those things for something that actually matters like making irl oppai lolis. Thoughts?
2 posts omitted.
>>
true rorycons will revolt. ヽ(`Д´)ノ
>>
>>180297
APPROVE:iyahoo:
>>
oppai loli is a sin and those who enjoy should burn!
>>
>>180282
>But we can't get lolis without oppais
whut? O_o
>>
>>180388
It reads "without consequences". And maybe, "without the burden of raising our own".

would you kiss a boy if you were a boy?
5 posts omitted.
>>
>>180404
now you
>>
If they were at least as cute and effeminate as black hair-kun, yes
>>
>>180407
how would you feel about young leonardo dicaprio?
>>
>>180409
i prefer old Leo
>>
>>180412
that is understandable, although i personally think he peaked in his early 20s in terms of attractiveness

u know who live in this house?
>>
me. ( ´,_ゝ`)
>>
are you an loli oppai luver?
>>
Sam Altman's yay! >"<

KONATA HAZ GUN
23 posts omitted.
>>
>>180166
a lot of what he wrote down is bullshit :glare2:
>>
police(?) cosplay ver.
>>
キタ━━━(゚∀゚)━━━!!
>>
>>180198
so they were able to get her the konata cosplay in the end... :astonish:
>>
>>180254
shes so cute :drool: her voice too. looks like mayushi a bit

Ow.... mai pooper
9 posts omitted.
>>
>>180137
>futabas oekaki board is still super active
theres like five oekaki boards and the only active one has probably seven regular artists who have been using the same boring motifs for decades
>>
i loev oekaki (´人`)
>>
>>180358
this is the image that kuz sent and said that its how kaguya looked IRL
>>
kuz on teh left
>>
>>180361
me on teh rite

sundae on a sunday :mona2:
>>
i will have that without teh sundane. Just teh neko please.

i wish i could sleep with myself! sometimes in bed i stare at my selfies and imagine it was someone else.... but nobody i know is good enough to be with me, so now i'm all alone
>>
when you fap its like giving yourself a handjob
>>
>>180299
ARE YOU STRAIGHT ENOUGH TO JACK IT TO YOUR OWN NUDES?
:hardgay:
>>
>>180300
I've never actually masturbated like that, I just kinda squeeze my bulge through my pants to make it feel good ヽ(´ー`)ノ. I think posting this last night triggered me dreaming about letting my ex-gf touch me again, which felt pretty bad

Do you have any ball-jointed dolls (or any poseable dolls for that matter)? What scenes/activities are they portraying? Do you dress them up as well?
>>
I'm a big fan, but I only have one doll, and she just sits there like a decoration. :sweat2:
>>
3810891
hina

horny gurls :love:
>>
Disgusting. Put a shirt on, each of you.
>>
I'M HORNY, HORNY HORNY HORNY
>>
I bet you look really good in that suit.
>>
i think it would be really cool if we all hugged while naked

7082768
thanks to the latest developments in technology, science has made it possible for robots to develop asperger autism!
3 posts omitted.
>>
they are perfect...
>>
ive seen these pasokons 2 or 3 years ago :nyaoo: wonderful, you could summon them by ringing a bell
>>
https://www.masiro.cafe/
https://www.youtube.com/@MaSiRoProject
Those robots are playable in suzuka-city, mie, JAPAN.
You can touch there hands.

Suzuka city as known as the F-1 racing circuit course.
>>
>>180307
It's a single guy's project? (;゚∀゚)
Some NEET somewhere in Japan probably has an advanced un-published robo waifu...
>>
Cute, when is my first date?

キタ━━━(゚∀゚)━━━!!
>>
how do japanese cats meow?
>>
Now that you mention it, can a cat from one country understand the meow of a cat from another? (=゚ω゚)ノ
>>
>>180263
why u no mouser yet
>>
>how do japanese cats meow?
Western cats say "meow", Japanese nekos say "nyaa"

In the last post about my vocal synthesis project, I talked about implementing the Wide-Band Voice Pulse Modeling algorithm. Since then, I've actually done some original research of my own and have devised what I believe to be three minor improvements to the algorithm.

I implemented the Wide-Band Voice Pulse Modeling algorithm (from Dr. Jordi Bonada's PhD thesis: https://www.tdx.cat/bitstream/handle/10803/7555/tjbs.pdf) via the upsampling method (specifically, upsampling via a natural cubic spline). Two methods are proposed in that paper, the other being via periodization. There is a patent that pertains to WBVPM, but it only covers the periodization version (which is what they used for their results), so I implemented the upsampling method instead. I have been able to validate the main results in that paper; specifically, its shape-invariance and lower residual compared to other methods. Furthermore, I have devised three significant improvements to the algorithm, two of which are only possible because I used the spline approach, so in a sense it was good that I had to do it that way.
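The spline-based upsampling step can be sketched in a few lines. This is a hedged illustration using SciPy, not the poster's actual code; the function name `upsample_pulse` and the integer upsampling factor are my own assumptions:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def upsample_pulse(pulse, factor):
    # Fit a natural cubic spline (zero second derivative at both ends,
    # i.e. the "natural cubic spline" variant mentioned above) through
    # the pulse samples, then evaluate it on a grid `factor` times finer.
    n = len(pulse)
    t = np.arange(n)
    spline = CubicSpline(t, pulse, bc_type="natural")
    t_fine = np.linspace(0, n - 1, factor * (n - 1) + 1)
    return spline(t_fine)
```

Keeping the spline object around (rather than only its sampled values) is what makes the later improvements cheap: the pulse can be re-evaluated at arbitrary positions at no extra fitting cost.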

Of the three improvements, I have implemented the first two and shown their advantage over the original WBVPM algorithm. The resulting score is the mean of the relative residual level (i.e. the difference between the original and reconstructed signal, relative to the level of the original signal). I computed it on an audio sample that deliberately exhibits traits noted as negatively affecting the WBVPM algorithm's output quality: a low-pitched voice with rapid and deep vibrato, transients, strong amplitude modulation, and a large portion of the sample spanning a voiced/unvoiced/voiced transition.

First I should note that my WBVPM implementation is currently far from optimal. The pitch estimation system (via the modified TWM algorithm) has not undergone testing and tuning of its parameters, and there are many variations of the TWM algorithm to consider. Additionally, I have not implemented unvoiced/voiced detection (because, as far as I can tell, it is not mentioned in Bonada's thesis; presumably it's in prior literature, but I have not researched it yet), so all the algorithms act as if they are always processing a voiced signal even when they are not.

RESILIENT BORDER INTERPOLATION IN SYNTHESIS - When I first implemented the synthesis step for WBVPM, it was late at night and I was tired. I wanted a quick result before bed and didn't understand the wording of the synthesis step's description. As such, my original implementation differed significantly: instead of using overlap-and-add, it found, for each sample, the closest voice pulse and evaluated its value at that time, taking advantage of the spline that was generated for downsampling and using the periodic nature of the pulse to extend it when the sample was beyond its domain (i.e. the opposite of overlapping). This approach led to high-frequency crackling artifacts due to discontinuities at the voice pulse boundaries.
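That first-attempt synthesis can be sketched as below. This is a minimal illustration under my own assumptions (pulses stored as one-period sample arrays, each centered at a given output position; the name `naive_synthesis` is mine), not the poster's implementation:

```python
import numpy as np

def naive_synthesis(pulse_centers, pulses, n_out):
    # For each output sample, take the value of the *closest* voice pulse,
    # reading it periodically beyond its own domain instead of
    # overlap-adding. The hard switch between pulses at the midpoints is
    # what produces the high-frequency crackle described above.
    out = np.empty(n_out)
    centers = np.asarray(pulse_centers, dtype=float)
    for n in range(n_out):
        k = int(np.argmin(np.abs(centers - n)))   # closest pulse
        pulse = pulses[k]
        period = len(pulse)
        # position relative to the pulse start (assumed to be one half
        # period before its center), wrapped periodically
        offset = int(round(n - (centers[k] - period / 2))) % period
        out[n] = pulse[offset]
    return out
```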

The following day, I properly understood the synthesis approach and rewrote the synthesis code. Interestingly, this actually gave worse overall results: while the high-frequency artifacts were gone, there were now large low-frequency artifacts that appeared as large modulations in the time domain. I eventually tracked this down to a bug in my implementation of the MFPA algorithm that sometimes resulted in massive errors of up to 1.5 radians. I fixed this bug and the reconstruction synthesis no longer had significant artifacts, but I thought it was interesting that my approach, despite having the discontinuity issue, was more resilient to errors in the MFPA estimation. I began wondering whether the two approaches could be combined into an even better one.

I then considered why the modulation occurred with the overlap-and-add method. When the fundamental frequency is stationary and the MFPA onsets are perfect, the trapezoidal window function is equivalent to a weighted average of two adjacent voice pulses over a duration of twice the border interpolation size. However, when the MFPA onsets are inaccurate, or even just when the fundamental frequency is non-stationary, this is no longer true. Even worse, from the weighted-average point of view, the weights no longer necessarily sum to one everywhere, hence the modulation.
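The weighted-average view is easy to verify numerically. A hedged sketch (the function name and the way I model the onset error are my own assumptions): the down-ramp of one trapezoidal window and the up-ramp of the next are complementary, so with perfect onsets the total gain is exactly one, while a shifted ramp makes the gain deviate from one, which is heard as amplitude modulation.

```python
import numpy as np

def overlap_gain(border, onset_error=0):
    # Total gain of trapezoidal overlap-add across one border
    # interpolation region of `border` samples.
    ramp_down = np.linspace(1.0, 0.0, border)  # tail of window A
    ramp_up = np.linspace(0.0, 1.0, border)    # head of window B
    if onset_error:
        # a mis-estimated onset slides the up-ramp by `onset_error`
        # samples (clipped at zero, as the window is non-negative)
        ramp_up = np.clip(ramp_up - onset_error / (border - 1), 0.0, 1.0)
    return ramp_down + ramp_up
```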

I then devised a method that does not result in modulation. It works by first synthesizing the 'inner' portion of each pulse (by 'inner', I mean the region from the end of the border interpolation at the start of the pulse to the start of the next border interpolation at its end). Then, for each gap between pulses, we compute each sample as a weighted average of two values: the value of each neighbouring voice pulse at that time. Since the gap extends beyond the boundaries of each voice pulse, we use the pulses' periodic nature to compute the effective position within a voice pulse by taking the position modulo the period of the fundamental frequency at that voice pulse. Because the fundamental frequencies of the two voice pulses may differ, the read position is advanced with a step size that changes linearly across the gap. At the end of the gap, the step size for the pulse it borders is one sample in time, while the step for the former voice pulse is the equivalent of one sample in the latter voice pulse relative to the former's fundamental frequency (e.g. if the second voice pulse has twice the fundamental frequency of the first, the step size for the first would be 2 and the step size for the second would be 1, at the end of the gap). At the start of the gap, it is the same except relative to the first pulse having a step of 1. In between, the step size is interpolated linearly.

It is worth noting that in the ideal case where the onsets are exactly correct and the fundamental frequency is stationary, the result of this approach is the same as using the trapezoidal window.
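The gap-filling scheme described above can be sketched as follows. This is a simplified illustration under my own assumptions (one stored period per pulse, linear interpolation within pulses rather than the splines the poster uses, and the name `fill_gap` is hypothetical):

```python
import numpy as np

def fill_gap(pulse_a, pulse_b, gap_len):
    # Each gap sample is a weighted average of the two neighbouring voice
    # pulses, each read periodically (position modulo its own period).
    # The weights sum to one everywhere, so there is no gain modulation.
    period_a, period_b = len(pulse_a), len(pulse_b)
    grid_a = np.arange(period_a + 1)
    vals_a = np.append(pulse_a, pulse_a[0])   # wrap for periodic reads
    grid_b = np.arange(period_b + 1)
    vals_b = np.append(pulse_b, pulse_b[0])
    out = np.empty(gap_len)
    pos_a = pos_b = 0.0
    for i in range(gap_len):
        u = i / max(gap_len - 1, 1)           # 0 at start of gap, 1 at end
        w_a, w_b = 1.0 - u, u
        va = np.interp(pos_a % period_a, grid_a, vals_a)
        vb = np.interp(pos_b % period_b, grid_b, vals_b)
        out[i] = w_a * va + w_b * vb
        # step sizes interpolate linearly between 1 (next to "our" pulse)
        # and the ratio of the two periods (next to the other pulse)
        step_a = (1.0 - u) * 1.0 + u * (period_a / period_b)
        step_b = (1.0 - u) * (period_b / period_a) + u * 1.0
        pos_a += step_a
        pos_b += step_b
    return out
```

When the two pulses are identical (stationary f0, perfect onsets), the output reduces to the pulse's own periodic extension, consistent with the equivalence to the trapezoidal window noted above.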

FREQUENCY WARP-CORRECTION - As noted in Bonada's thesis, WBVPM assumes that the fundamental frequency is stationary within each pulse. This is not actually true, and the resulting artifacts are particularly apparent for low fundamental frequency voice signals, because each period of the signal is longer in time and thus the internal state of the system has more time to change.

One of the changes that can happen over time is modulation of the fundamental frequency. This can actually be thought of as a time-domain remapping function that distorts each voice pulse according to a continuous fundamental frequency trajectory.

I discovered a way of correcting this largely by accident, while thinking about the modulation issue from the previous section. I had proposed changing the step size linearly in the gaps between the 'inner' pulses, but we have a whole discrete sequence of fundamental frequencies. So what if, instead of changing the step size linearly, we created a spline from the fundamental frequencies and varied the step size based on that? Then I realized we could also apply this to the whole voice pulses and sample everything with a step size driven by the fundamental frequency trajectory. This would act like the distortion caused by parameters changing within each voice pulse, at least in the synthesis stage. Furthermore, since we are already computing splines for each voice pulse to downsample it, this comes at very little additional computational cost.

However, the voice pulses in analysis are already distorted. So we can apply the inverse resampling in the upsampling stage of WBVPM analysis to correct for the non-stationary frequency, and then redistort according to the transformed fundamental frequency trajectory in the synthesis stage. This makes the method effectively invariant to modulations in fundamental frequency, so long as the modulation frequency is lower than the fundamental frequency and the modulation is modeled well by the spline, which should be the case when the modulation period spans at least several voice pulses.
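The core of the warp-correction, computing f0-dependent read positions instead of a fixed step of one sample, might look like this. A hedged sketch: the function name, parameters (`start`, `sr`, `f0_ref`), and the use of SciPy's default spline are my assumptions, not the poster's code:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def warp_positions(f0_times, f0_values, start, n, sr, f0_ref):
    # Build a spline over the discrete f0 estimates, then scale each
    # unit step by the instantaneous f0 relative to the pulse's nominal
    # f0. Integrating (cumulative-summing) the steps gives the
    # time-warped read positions that undo intra-pulse f0 modulation;
    # evaluating the same positions in reverse redistorts at synthesis.
    f0_spline = CubicSpline(f0_times, f0_values)
    t = start + np.arange(n) / sr
    steps = f0_spline(t) / f0_ref   # step of exactly 1 when f0 == f0_ref
    pos = np.concatenate(([0.0], np.cumsum(steps)[:-1]))
    return pos
```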

Comment too long, view post No.180302 to see the full comment.
>>
The technique works as follows:
a) First, for each voice pulse, and then for each harmonic of its spectrum, we compute a spline based on the amplitude of that harmonic in the voice pulse as well as in a fixed number of surrounding voice pulses.
b) Since the time delta between voice pulses can vary, we then resample each local harmonic spline with fixed steps in time.
c) We compute the Fourier transform of these resampled local harmonic trajectories.
d) We apply a low-pass and a high-pass filter to separate each trajectory into low-frequency and high-frequency components.
e) We then apply the inverse Fourier transform to each of these. We can then sample the low-pass trajectory at the time of the voice pulse to get the denoised amplitude of that harmonic for that voice pulse. The same can be done with the high-pass trajectory to obtain a pseudo-pulse representing the residual. These residual voice pulses can then be synthesized using the WBVPM synthesis method to obtain a time-domain residual signal, which can be processed separately from the main harmonic signal.
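Steps (a)-(e) can be sketched for a single harmonic as follows. This is a hedged illustration assuming NumPy/SciPy and ideal brick-wall filters; the function name, the `cutoff_hz` parameter, and the fixed-grid details are my own, not the poster's implementation:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def split_harmonic_trajectory(pulse_times, amps, dt, cutoff_hz):
    # (a)-(b): spline over the unevenly spaced per-pulse amplitudes,
    # resampled onto a fixed-step time grid
    spline = CubicSpline(pulse_times, amps)
    grid = np.arange(pulse_times[0], pulse_times[-1], dt)
    traj = spline(grid)
    # (c)-(d): split in the frequency domain with brick-wall filters
    spec = np.fft.rfft(traj)
    freqs = np.fft.rfftfreq(len(traj), dt)
    low = spec * (freqs <= cutoff_hz)
    high = spec * (freqs > cutoff_hz)
    # (e): back to the time domain; sample the low-pass trajectory at
    # the pulse times for the denoised amplitudes, and return the
    # high-pass trajectory as this harmonic's residual
    low_traj = np.fft.irfft(low, n=len(traj))
    high_traj = np.fft.irfft(high, n=len(traj))
    denoised = CubicSpline(grid, low_traj)(pulse_times)
    return denoised, high_traj
```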

A significant source of error in this process would presumably come from the resampling step. It can be reduced by using a smaller time step, at increased computational cost. However, the error could probably be reduced much further by first calculating the difference between the original amplitudes and the amplitudes at the same times in a spline computed from the resampled harmonic trajectory before applying the band filters; this difference can later be added back to the low-pass amplitude trajectory.

The denoised harmonic phase can also be computed via the same method, using Bonada's method for unwrapping phase across both frequency and time. The residual phase can be calculated by taking the difference of the original phase from the denoised phase and dividing it by the residual amplitude.

RESULTS:

I have tested these improvements and obtained the following results for the aforementioned audio sample:

Original WBVPM: -36.355dB
Warp-correction improvement only: -36.74595dB
Warp-correction & Resilient border interpolation in synthesis: -37.41177dB

More research is needed to properly evaluate these improvements across more samples with more variety, and to see if these techniques still result in improvements with more accurate pitch and MFPA estimation and with proper handling of unvoiced/voiced frames.

MAHORO!
MAHORO!
MAHORO!
MAHORO!
MAHORO!
4 posts omitted.
>>
>>180265
yes it is :angry: in a boring way at least :glare1:
>>
>>180259
wuts ur favorite loli/shota porno u would liek to share?
>>
>>180269
what's your badge number bud ( ´ω`)
>>
WHAT DO YOU MEAN STREAM IN TWO HOURS
I'M DRUNK AND READY NOW!!! ヽ(`Д´)ノ
>>
it's starting in a few minutes キタ━━━(・∀・)━━━!!

