Besides deep learning, many other solutions exist for computers to discover informal knowledge by themselves. How many such approaches do we know, and how are they categorized?
Has any new knowledge been discovered by deep learning so far?
For image-processing-based AIs such as object detection: are images a good source for feature extraction? (They work for humans, but we know they are not optimal, since our eyes and brain often make mistakes on this.) Are there other sources from which we can extract more useful features for the same purpose?
In which ways does a deep neural network look similar to the human brain?
In which ways does a DNN not work like the human brain?
What are the algorithms in deep learning?
How much does it cost to train a neural network?
Backpropagation calculus
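A minimal worked example of the chain rule behind backpropagation, assuming the simplest case of one neuron per layer with a squared-error cost (the notation is mine):

```latex
% Forward pass for layer L, one neuron per layer:
%   z^{(L)} = w^{(L)} a^{(L-1)} + b^{(L)},  a^{(L)} = \sigma(z^{(L)}),
%   cost C = (a^{(L)} - y)^2 for target y.
\begin{aligned}
\frac{\partial C}{\partial w^{(L)}}
  &= \frac{\partial z^{(L)}}{\partial w^{(L)}}
     \cdot \frac{\partial a^{(L)}}{\partial z^{(L)}}
     \cdot \frac{\partial C}{\partial a^{(L)}} \\
  &= a^{(L-1)} \cdot \sigma'\bigl(z^{(L)}\bigr) \cdot 2\bigl(a^{(L)} - y\bigr)
\end{aligned}
```

The bias gradient is the same product with \(\partial z^{(L)} / \partial b^{(L)} = 1\) in place of \(a^{(L-1)}\); for earlier layers the chain rule is applied recursively through \(\partial C / \partial a^{(L-1)}\).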
Reference 1: Artificial Intelligence

- Early stage of AI: problems that can be described by a list of formal, mathematical rules. Relatively easy for computers but hard for humans.
- Now the challenging tasks are ones that are easy for people to perform but hard for people to describe formally. ==> LLM: but not impossible to describe formally, yes???
- Deep learning: a solution for tasks that are easy for people but hard to describe formally.
References: Meet Horovod: Uber’s Open Source Distributed Deep Learning Framework for TensorFlow

Motivation

Problems with the standard distributed TensorFlow technique:

- It was not always clear which code modifications needed to be made to distribute the model training code.
- Many new concepts introduced hard-to-diagnose bugs that slowed training.
- The standard distributed TensorFlow package introduces many new concepts: workers, parameter servers, tf.train.Server(), tf.train.ClusterSpec(), tf.train.SyncReplicasOptimizer(), and tf.train.replica_device_setter(), to name a few. A sketch of that boilerplate follows below.
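To make the list above concrete, here is a hedged sketch of that boilerplate in the TensorFlow 1.x API; the host names, ports, and toy objective are made up for illustration:

```python
import tensorflow as tf  # TensorFlow 1.x API

# Describe the whole cluster by hand (hosts and ports are hypothetical).
cluster = tf.train.ClusterSpec({
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
})

# Every process must know its own job name and task index.
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# Variables are placed on parameter servers, ops on workers.
with tf.device(tf.train.replica_device_setter(cluster=cluster)):
    w = tf.Variable(0.0)                  # lives on the parameter server
    loss = tf.square(w - 1.0)             # toy objective
    opt = tf.train.GradientDescentOptimizer(0.01)
    # Synchronous gradient aggregation needs yet another wrapper.
    opt = tf.train.SyncReplicasOptimizer(opt, replicas_to_aggregate=2)
    train_op = opt.minimize(loss)
```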
References: Horovod: fast and easy distributed deep learning in TensorFlow (2018); code.

Overview

- Uber uses deep learning for self-driving, trip forecasting, and fraud prevention.
- Michelangelo[c3]: an internal ML-as-a-service platform for deploying ML systems at scale.
- Horovod: an open-source component of Michelangelo’s deep learning toolkit which makes it easier to start – and speed up – distributed deep learning projects with TensorFlow.

Motivation

As datasets grew, so did the training times, which sometimes took a week or longer to complete.
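By contrast, a minimal Horovod sketch using the horovod.tensorflow.keras API; the toy model is my own assumption, but the hvd.* calls are Horovod's documented interface:

```python
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()  # launched as one process per GPU, e.g. via horovodrun

# Pin each process to its own GPU (assumes one GPU per local rank).
gpus = tf.config.experimental.list_physical_devices("GPU")
if gpus:
    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], "GPU")

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])

# Scale the learning rate by the worker count, then wrap the optimizer so
# gradients are averaged across workers with ring-allreduce.
opt = tf.keras.optimizers.SGD(0.01 * hvd.size())
model.compile(loss="mse", optimizer=hvd.DistributedOptimizer(opt))

# Broadcast initial weights from rank 0 so every worker starts identically.
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]
```

The point of the framework is that an ordinary single-GPU training script becomes distributed with roughly these few additions.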
Q&A How many ways to split training process into multiple steps, where each step can be done at different place. Reference 1 Intro - Perceptron Perceptron, 1985. Activation Functions: non-linearity two input, 1 output. multi output. Traning init weight. –> wrong prediction –> big loss multiple data –> loss of all predictions Loss function Binary Cross Entropy Loss: for output to be 0 or 1. Mean Squared Error Loss: for
Q&A/Top Wonderings/Todones Can docker migration assist the fedarated learning? Reference 1 reference ↩
Q&A What is tensor? see tensors Reference 1 TensorFlow Tensors ↩
If you could revise the fundamental principles of computer system design to improve security... what would you change?