
So I did talk earlier about how I wanted to play around with a different way of doing neural nets. The idea is to make each hidden layer only one neuron wide, with each neuron taking input from all prior neurons instead of just the prior layer. Just to be different on this one, I used a reversible square root as the "activation function".
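A minimal sketch of what a forward pass through that architecture might look like. The weight shapes and the specific activation are assumptions: here "reversible square root" is taken to mean the sign-preserving square root sign(x)·sqrt(|x|), which is invertible (y ↦ sign(y)·y²), and each one-neuron "layer" gets a weight vector spanning every value computed so far.

import numpy as np

def act(x):
    # Assumed "reversible square root": sign-preserving sqrt,
    # invertible via sign(y) * y**2.
    return np.sign(x) * np.sqrt(np.abs(x))

def forward(x, weights):
    # x: 1-D input vector.
    # weights: one weight vector per hidden neuron; neuron i sees ALL
    # prior values (the original inputs plus every earlier hidden output),
    # so its weight vector has len(inputs) + i entries.
    values = list(x)
    for w in weights:
        z = np.dot(w, values)
        values.append(act(z))
    return values[-1]  # treat the last neuron as the output

# Tiny example with hypothetical random weights (2 inputs, 4 hidden neurons).
rng = np.random.default_rng(0)
n_in, n_hidden = 2, 4
weights = [rng.standard_normal(n_in + i) for i in range(n_hidden)]
print(forward(np.array([1.0, 0.0]), weights))

Because each neuron feeds every later one, the network is effectively fully connected across depth, so even one-wide layers can represent something XOR-shaped.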

Here is the data if anyone wants a plot they can rotate: https://jssocial.pw/ppkey/fget/x0x7/upload/xorplot.json

Code to plot:

import json
from mpl_toolkits.mplot3d import Axes3D  # registers the 3d projection on older matplotlib
import matplotlib.pyplot as plt

with open('xorplot.json') as f:
    d = json.load(f)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

ax.scatter(d['x'], d['y'], d['z'], c='r', marker='o')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.show()