Friday, March 22, 2019






Hello, I have spent a lot of time looking for work and I am still searching for a job, so here is how to present yourself in a job interview.

How to present yourself in a job interview
1 - Introduce yourself!
--------
My name is '......', I am (age) years old and I am from (origin/town). I hold a degree in ...., and in parallel I am continuing my studies in ... at ..... I have already completed internships, and I would very much like to join your team as (position).
2 - Why did you choose this profession?
--------
On the one hand (the subjective side), I find this profession fascinating and it matches my personality and my strengths, because I enjoy work that calls for analysis, concentration and communication. On the other hand (the objective side), I consider that (position) is an essential function and tool for the management and growth of a company, for example.
3 - Why did you choose our company?
--------
- Because your company has a good reputation in its sector (banking, tourism, retail...), and the profile you describe in your posting matches mine. So I feel that your company is where I can best put my skills to work.
4 - Have you already completed any internships?
--------
Yes, I completed internships at (company names).
5 - What did you learn during these internships?
--------
On the technical side, I saw first-hand how to record VAT... (just an example); I was also in charge of .... On the interpersonal side, I gained quite a few skills such as teamwork, punctuality, and respect for my superiors and for people with more experience than me. So I can say that it is thanks to these internships that I acquired knowledge (savoir), practical skills (savoir-faire) and professional conduct (savoir-être).
6 - What are your strengths and weaknesses?
--------
- Regarding my strengths, I think I am motivated and have a good sense of analysis and communication. As for my weaknesses, I believe I am overly organized, a little stubborn, and a perfectionist.
7 - Do you prefer working alone or in a team?
--------
In a team, because a company is an organization, so every member is a link that completes the chain. But I also like working alone, especially on tasks that require analysis and concentration...
8 - In the event of a disagreement with your superior, how will you react?
--------
I will listen to them, explain my point of view in a friendly way, and in any case respect their decision, since they are the one who makes the decisions. And if, for example, I have made a mistake, I will of course apologize.
9 - In the event of a misunderstanding with a client, how will you react?
--------
I will listen to them, explain the situation in a friendly way, and try to find common ground. If a client goes too far, I will stay calm and contact my superior.
10 - Are you willing to travel?
--------
If yes: yes, I am willing to travel, and I think that travel will broaden my experience, which is in the company's interest.

Tuesday, July 25, 2017

Michio Kaku’s ‘Future of the Mind’

https://images-na.ssl-images-amazon.com/images/I/51l1sovoHeL._SX327_BO1,204,203,200_.jpg

The book that I read this month is 'The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind' (Doubleday) by Michio Kaku.

In his book, “Physics of the Future,” Kaku took readers on a whirlwind tour of science fictions he believes are poised to become science realities: space travel and nanotech medical robots. In “The Future of the Mind,” Kaku ushers us to even stranger territory — the science of consciousness. Kaku claims the mysteries of the mind will soon be mysteries no more. It’s an audacious assertion backed up, he says, by a flood of new neuroscience technologies. But behind his buoyant optimism lie questions that threaten the enterprise he describes so skillfully. What does a science of the mind, rather than the brain, look like? Does such a science require reducing the mind to “just neurons,” or are there other paths to understanding the phenomena of consciousness?
For Kaku, the brain is a computer made of meat, and understanding the mind is just a really, really hard engineering problem. The fundamental laws are already known, and Kaku tells us we’ll soon be manipulating the stuff of consciousness with the same acuity we push electrons around in our digital devices. This singular confidence is both strength and weakness as Kaku unspools his narrative, and doubts about his core convictions begin to trail the reader like a parade of ghosts.

Kaku takes us to laboratories where researchers are studying the microscopic dynamics of the brain’s wiring. For example, using functional magnetic resonance imaging (fMRI), which tracks neural activity, researchers have recorded how the brain lights up when shown fragments of a video. Scientists can then determine a subject’s neural response to seeing various things. Comparing this dictionary of neural responses to the observed fMRI patterns in a person viewing a different film, researchers can reconstruct a reasonable facsimile of the film based purely on brain activity. With this kind of technique it may even be possible for scientists to crudely identify what people hooked to fMRI machines are dreaming about.

From these developments, Kaku imagines an era when memories can be recorded and then played back into someone else’s head by stimulating the same pattern of neural activity. Going one step further, machines wired directly to brains will be able to read and transmit our thoughts instantaneously.
Minds made of meat (ours) are just one of Kaku’s concerns. He is also interested in the possibilities of silicon and even alien minds. A compelling chapter on artificial intelligence describes the explosion in robotics and the new research that seeks to broaden the requirements for silicon self-consciousness, including a capacity to feel emotion.

Like the futurist Ray Kurzweil, Kaku believes the most important advances in silicon computing will still serve our needs and not the coming robot overlords (if we do create them). By mapping out the “connectome” — the explicit account of every neural connection in your head — Kaku tells us it should be possible to reverse-engineer each and every person’s brain. Reconstruct this connectome in a computer and you will have downloaded yourself into that machine. In this way the future of the mind, your mind in particular, might last as long as there are computers to run your connectome.

But are you nothing more than the sum of your brain’s connections? Here’s where Kaku stumbles. It’s been almost 20 years since the philosopher David Chalmers introduced the distinction between “easy” and “hard” problems in the study of consciousness. Easy problems, according to Chalmers, were things like figuring out how the brain cycles through signals from the arm allowing you to pick up an object. Researchers developing the next generation of prosthetics will tell you this “easy” problem remains pretty hard, but as Chalmers rightly pointed out, control of the arm is nothing compared with developing a scientific account of the vividness of our own experience. It’s the internal luminosity — the “being” of our being — that constitutes Chalmers’s hard problem and that eludes Kaku’s engineering-based perspective.

The problem is that we still don’t have much in the way of a working model of consciousness. With a physicist’s eye for economy, Kaku tries to provide one through what he calls a “space-time theory.” It’s a model of consciousness with a graded scale of awareness based on the number of feedback loops between environment and organism. Thus, in Kaku’s view, a thermostat has the lowest possible level of consciousness while humans, with our ability to move through space and project ourselves mentally backward and forward in time, represent the highest level currently known.

I’ve spent most of my professional life running supercomputer simulations of events like the collapsing of interstellar gas clouds to form new stars, and it seems to me that Kaku has taken a metaphor and mistaken it for a mechanism. There has always been the temptation to take the latest technology, like clockworks in the 17th century, and see it as a model for the mechanics of thought. But simulations are not a self, and information is not experience. Kaku acknowledges the existence of the hard problem but waves it away. “There is no such thing as the Hard Problem,” he writes.
Thus the essential mystery of our lives — the strange sense of presence to which we’re bound till death and that lies at the heart of so much poetry, art and music — is dismissed as a non-problem when it’s exactly the problem we can’t ignore. If we’re to have anything like a final theory of consciousness, we had better be attentive to the complexity of how we experience our being.
When Kaku quotes the cognitive scientist Marvin Minsky telling us that “minds are simply what brains do,” he assumes that scientific accounts of consciousness must reduce to discussions of circuitry and programming alone. But there are other options. For those pursuing ideas of “emergence,” descriptions of lower-level structures, like neurons, don’t exhaust nature’s creative potential. There’s also the more radical possibility that some rudimentary form of consciousness must be added to the list of things the world is built of, like mass or electric charge.
On the ethical front, Kaku does an admirable job of at least raising the troubling issues inherent in the technologies he describes, but there’s one critical question he misses entirely. New technologies tend to create their own realities and values. If we treat minds like meat-computers, we may end up in a world where that’s the only aspect of their nature we perceive or value.
Keeping these questions in mind, however, only enhances the enjoyment of this wide-ranging book. Kaku thinks with great breadth, and the vistas he presents us are worth the trip even if some of them turn out to be only dreamscapes.

THE FUTURE OF THE MIND

The Scientific Quest to Understand, Enhance, and Empower the Mind
By Michio Kaku


does neuroscience have anything to offer AI?


A review was published this week in Neuron by DeepMind luminary Demis Hassabis and colleagues about Neuroscience-inspired Artificial Intelligence. As one would expect from a journal called Neuron, the article was pretty positive about the use of neurons!
There have been two key concepts from neuroscience that are ubiquitous in the AI field today: Deep Learning and Reinforcement Learning. Both are very direct descendants of research from the neuroscience community. In fact, saying that Deep Learning is an outgrowth of neuroscience obscures the amount of influence neuroscience has had. It did not just gift the idea of connecting artificial neurons together to build a fictive brain, but also much more technical ideas: convolutional neural networks that apply a single function repeatedly across their input, as the retina or visual cortex does; hierarchical processing in the way the brain goes from layer to layer; and divisive normalization as a way to keep outputs within a reasonable and useful range. Similarly, Reinforcement Learning and all its variants have continued to expand and be developed by the cognitive community.
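To make one of those borrowed ideas concrete, here is a minimal numpy sketch of divisive normalization in the classic form where each unit's response is divided by the pooled activity of the population. The squared-sum pooling and the constants are illustrative assumptions, not the exact formulation used by any particular network.

```python
import numpy as np

def divisive_normalization(responses, sigma=1.0):
    """Scale each unit's response by the pooled activity of the population.

    `responses` is a 1-D array of non-negative unit activations; `sigma`
    prevents division by zero and sets the semi-saturation point.
    """
    pooled = np.sum(responses ** 2)
    return responses / (sigma ** 2 + pooled)

# A single strong response suppresses the normalized responses of its neighbors,
# keeping the population output in a reasonable and useful range.
weak = divisive_normalization(np.array([1.0, 1.0, 1.0]))
strong = divisive_normalization(np.array([1.0, 1.0, 10.0]))
```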
Sounds great! So what about more recent inspirations? Here, Hassabis & co offer up the roles of attention, episodic memory, working memory, and ‘continual learning’. But reading this, I felt less inspired than morose (see this thread). Why? Well, look at the example of attention. Attention comes in many forms: automatic, voluntary, bottom-up, top-down, executive, spatial, feature-based, object-based, and more. It sometimes means a sharpening of the collection of things a neuron responds to, so that instead of being active in response to an edge oriented this way, that way, or another way, it is only active when it sees an edge oriented one particular way. But it sometimes means a narrowing of the area in space that the neuron responds to. Sometimes responses between neurons become more diverse (decorrelated).
But this is not really how ‘attention’ works in deep networks. All of these examples seem primarily motivated by the underlying psychology, not the biological implementation. Which is fine! But does that mean that the biology has nothing to teach us? Even at best, I am not expecting Deep Networks to converge precisely to mammalian-based neural networks, nor that everything the brain does should be useful to AI.
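For contrast, here is a minimal sketch of the sort of 'attention' a deep network typically uses: a softmax-weighted average of a set of values, with the weights determined by how well each item's key matches a query. The dot-product scoring is a common choice, but the exact variant and the toy shapes are assumptions for illustration.

```python
import numpy as np

def soft_attention(query, keys, values):
    """Return a softmax-weighted combination of `values`.

    `query` has shape (d,), `keys` shape (n, d), `values` shape (n, k):
    items whose key is most similar to the query dominate the output.
    """
    scores = keys @ query                    # similarity of each item to the query
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ values                  # weighted average, fully differentiable
```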
This leads to some normative questions: why hasn’t neuroscience contributed more, especially to Deep Learning? And should we even expect it to?
It could just be that the flow of information from neuroscience to AI  is too weak. It’s not exactly like there’s a great list of “here are all the equations that describe how we think the brain works”. If you wanted to use a more nitty-gritty implementation of attention, where would you turn? Scholarpedia? What if someone wants to move step-by-step through all the ways that visual attention contributes to visual processing? How would they do it? Answer: they would become a neuroscientist. Which doesn’t really help, time-wise. But maybe, slowly over time, these two fields will be more integrated.
More to the point, why even try? AI and neuroscience are two very different fields; one is an engineering discipline asking “how do we get this to work?” and the other a scientific discipline asking “why does this work?” Who is to say that anything we learn from neuroscience would even be relevant to AI? Animals are bags of meat whose nervous systems are trying to solve all sorts of problems (like wiring-length energy costs between neurons, physical transmission delays, the need to regulate blood osmolality, etc.) that AI has no real interest in or need to include, but which may be fundamental to how the nervous system has evolved. Is the brain the bird to AI’s airplane, accomplishing the same job but engineered in a totally different way?
Then in the middle of writing this, a tweet came through my feed that made me think I had a lot of this wrong (I also realized I had become too fixated on ‘the present’ section of their paper and less on ‘the past’ which is only a few years old anyway).
The ‘best paper’ award at the CVPR 2017 conference went to this paper (introducing DenseNets), which connects blocks of layers together, passing information forward from one layer to the next.
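In rough terms, each layer in such a block sees the concatenation of the block's input and every earlier layer's output. Here is a minimal sketch of that wiring, assuming plain feature vectors and generic layer functions rather than the convolutional layers used in the actual paper:

```python
import numpy as np

def dense_block(x, layers):
    """Sketch of DenseNet-style connectivity: each layer receives the
    concatenation of the block input and all previous layer outputs.

    `layers` is a list of functions, each mapping a feature vector to a new
    feature vector (in the real model, each would be a conv + ReLU layer).
    """
    features = [x]
    for layer in layers:
        out = layer(np.concatenate(features))  # everything seen so far is passed forward
        features.append(out)                   # and this output is forwarded too
    return np.concatenate(features)
```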

That looks a lot more like what cortex looks like! Though obviously sensory systems in biology are a bit more complicated.
And the advantages? “DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters”

So are the other features of cortex useful in some way? How? How do we have to implement them to make them useful? What are the drawbacks?
Neuroscience is big and unwieldy, spanning a huge number of different fields. But most of these fields are trying to solve exactly the same problem that Deep Learning is trying to solve in very similar ways. This is an incredibly exciting opportunity – a lot of Deep Learning is essentially applied theoretical neuroscience. Which of our hypotheses about why we have attention are true? Which are useless?

Friday, July 21, 2017

The limitations of deep learning


Deep learning: the geometric view

The most surprising thing about deep learning is how simple it is. Ten years ago, no one expected that we would achieve such amazing results on machine perception problems by using simple parametric models trained with gradient descent. Now, it turns out that all you need is sufficiently large parametric models trained with gradient descent on sufficiently many examples. As Feynman once said about the universe, "It's not complicated, it's just a lot of it".
In deep learning, everything is a vector, i.e. everything is a point in a geometric space. Model inputs (they could be text, images, etc.) and targets are first "vectorized", i.e. turned into points in some initial input vector space and target vector space. Each layer in a deep learning model operates one simple geometric transformation on the data that goes through it. Together, the chain of layers of the model forms one very complex geometric transformation, broken down into a series of simple ones. This complex transformation attempts to map the input space to the target space, one point at a time. This transformation is parametrized by the weights of the layers, which are iteratively updated based on how well the model is currently performing. A key characteristic of this geometric transformation is that it must be differentiable, which is required in order for us to be able to learn its parameters via gradient descent. Intuitively, this means that the geometric morphing from inputs to outputs must be smooth and continuous—a significant constraint.
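As a toy illustration of "a chain of simple geometric transformations", here is a two-layer model written out explicitly in numpy; the dimensions, the tanh nonlinearity, and the random weights are arbitrary choices for the sketch, standing in for parameters that gradient descent would actually learn.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # parameters of the first simple transformation
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)   # parameters of the second simple transformation

def model(x):
    """Map a point in the 4-D input space to a point in the 2-D target space."""
    h = np.tanh(x @ W1 + b1)   # affine map followed by a smooth, differentiable bend
    return h @ W2 + b2         # another affine map onto the target space

y = model(np.array([0.5, -1.0, 2.0, 0.0]))      # one point moved through the whole chain
```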
The whole process of applying this complex geometric transformation to the input data can be visualized in 3D by imagining a person trying to uncrumple a paper ball: the crumpled paper ball is the manifold of the input data that the model starts with. Each movement operated by the person on the paper ball is similar to a simple geometric transformation operated by one layer. The full uncrumpling gesture sequence is the complex transformation of the entire model. Deep learning models are mathematical machines for uncrumpling complicated manifolds of high-dimensional data.
That's the magic of deep learning: turning meaning into vectors, into geometric spaces, then incrementally learning complex geometric transformations that map one space to another. All you need are spaces of sufficiently high dimensionality in order to capture the full scope of the relationships found in the original data.

The limitations of deep learning

The space of applications that can be implemented with this simple strategy is nearly infinite. And yet, many more applications are completely out of reach for current deep learning techniques—even given vast amounts of human-annotated data. Say, for instance, that you could assemble a dataset of hundreds of thousands—even millions—of English language descriptions of the features of a software product, as written by a product manager, as well as the corresponding source code developed by a team of engineers to meet these requirements. Even with this data, you could not train a deep learning model to simply read a product description and generate the appropriate codebase. That's just one example among many. In general, anything that requires reasoning—like programming, or applying the scientific method—long-term planning, and algorithmic-like data manipulation, is out of reach for deep learning models, no matter how much data you throw at them. Even learning a sorting algorithm with a deep neural network is tremendously difficult.
This is because a deep learning model is "just" a chain of simple, continuous geometric transformations mapping one vector space into another. All it can do is map one data manifold X into another manifold Y, assuming the existence of a learnable continuous transform from X to Y, and the availability of a dense sampling of X:Y to use as training data. So even though a deep learning model can be interpreted as a kind of program, inversely most programs cannot be expressed as deep learning models—for most tasks, either there exists no corresponding practically-sized deep neural network that solves the task, or even if there exists one, it may not be learnable, i.e. the corresponding geometric transform may be far too complex, or there may not be appropriate data available to learn it.
Scaling up current deep learning techniques by stacking more layers and using more training data can only superficially palliate some of these issues. It will not solve the more fundamental problem that deep learning models are very limited in what they can represent, and that most of the programs that one may wish to learn cannot be expressed as a continuous geometric morphing of a data manifold.

The risk of anthropomorphizing machine learning models

One very real risk with contemporary AI is that of misinterpreting what deep learning models do, and overestimating their abilities. A fundamental feature of the human mind is our "theory of mind", our tendency to project intentions, beliefs and knowledge on the things around us. Drawing a smiley face on a rock suddenly makes it "happy"—in our minds. Applied to deep learning, this means that when we are able to somewhat successfully train a model to generate captions to describe pictures, for instance, we are led to believe that the model "understands" the contents of the pictures, as well as the captions it generates. We then proceed to be very surprised when any slight departure from the sort of images present in the training data causes the model to start generating completely absurd captions.
Failure of a deep learning-based image captioning system.
In particular, this is highlighted by "adversarial examples", which are input samples to a deep learning network that are designed to trick the model into misclassifying them. You are already aware that it is possible to do gradient ascent in input space to generate inputs that maximize the activation of some convnet filter, for instance—this was the basis of the filter visualization technique we introduced in Chapter 5 (Note: of Deep Learning with Python), as well as the Deep Dream algorithm from Chapter 8. Similarly, through gradient ascent, one can slightly modify an image in order to maximize the class prediction for a given class. By taking a picture of a panda and adding to it a "gibbon" gradient, we can get a neural network to classify this panda as a gibbon. This evidences both the brittleness of these models, and the deep difference between the input-to-output mapping that they operate and our own human perception.
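A minimal sketch of that kind of targeted gradient ascent, assuming TensorFlow 2.x / Keras: `model` stands for any pretrained classifier that outputs class probabilities, `image` is a preprocessed batch of shape (1, H, W, 3), and the function name and step size are illustrative rather than a reproduction of the original panda/gibbon attack.

```python
import tensorflow as tf

def adversarial_example(model, image, wrong_class_index, epsilon=0.01):
    """Nudge `image` so the classifier leans toward `wrong_class_index`."""
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        predictions = model(image)
        # Loss that is small when the model assigns high probability to the wrong class.
        loss = tf.keras.losses.sparse_categorical_crossentropy(
            [wrong_class_index], predictions)
    # Gradient of the loss with respect to the *input pixels*, not the weights.
    gradient = tape.gradient(loss, image)
    # Step the pixels against this gradient, making the wrong class more likely
    # while keeping the change small; in practice one would also clip to the
    # valid pixel range.
    return image - epsilon * tf.sign(gradient)
```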
An adversarial example: imperceptible changes in an image can upend a model's classification of the image.
In short, deep learning models do not have any understanding of their input, at least not in any human sense. Our own understanding of images, sounds, and language, is grounded in our sensorimotor experience as humans—as embodied earthly creatures. Machine learning models have no access to such experiences and thus cannot "understand" their inputs in any human-relatable way. By annotating large numbers of training examples to feed into our models, we get them to learn a geometric transform that maps data to human concepts on this specific set of examples, but this mapping is just a simplistic sketch of the original model in our minds, the one developed from our experience as embodied agents—it is like a dim image in a mirror.
Current machine learning models: like a dim image in a mirror.
As a machine learning practitioner, always be mindful of this, and never fall into the trap of believing that neural networks understand the task they perform—they don't, at least not in a way that would make sense to us. They were trained on a different, far narrower task than the one we wanted to teach them: that of merely mapping training inputs to training targets, point by point. Show them anything that deviates from their training data, and they will break in the most absurd ways.

Local generalization versus extreme generalization

There just seem to be fundamental differences between the straightforward geometric morphing from input to output that deep learning models do, and the way that humans think and learn. It isn't just the fact that humans learn by themselves from embodied experience instead of being presented with explicit training examples. Aside from the different learning processes, there is a fundamental difference in the nature of the underlying representations.
Humans are capable of far more than mapping immediate stimuli to immediate responses, like a deep net, or maybe an insect, would do. They maintain complex, abstract models of their current situation, of themselves, of other people, and can use these models to anticipate different possible futures and perform long-term planning. They are capable of merging together known concepts to represent something they have never experienced before—like picturing a horse wearing jeans, for instance, or imagining what they would do if they won the lottery. This ability to handle hypotheticals, to expand our mental model space far beyond what we can experience directly, in a word, to perform abstraction and reasoning, is arguably the defining characteristic of human cognition. I call it "extreme generalization": an ability to adapt to novel, never experienced before situations, using very little data or even no new data at all.
This stands in sharp contrast with what deep nets do, which I would call "local generalization": the mapping from inputs to outputs performed by deep nets quickly stops making sense if new inputs differ even slightly from what they saw at training time. Consider, for instance, the problem of learning the appropriate launch parameters to get a rocket to land on the moon. If you were to use a deep net for this task, whether trained using supervised learning or reinforcement learning, you would need to feed it thousands or even millions of launch trials, i.e. you would need to expose it to a dense sampling of the input space, in order for it to learn a reliable mapping from input space to output space. By contrast, humans can use their power of abstraction to come up with physical models—rocket science—and derive an exact solution that will get the rocket on the moon in just one or a few trials. Similarly, if you developed a deep net controlling a human body and wanted it to learn to safely navigate a city without getting hit by cars, the net would have to die many thousands of times in various situations until it could infer that cars are dangerous and develop appropriate avoidance behaviors. Dropped into a new city, the net would have to relearn most of what it knows. On the other hand, humans are able to learn safe behaviors without having to die even once—again, thanks to their power of abstract modeling of hypothetical situations.
Local generalization vs. extreme generalization.
In short, despite our progress on machine perception, we are still very far from human-level AI: our models can only perform local generalization, adapting to new situations that must stay very close to past data, while human cognition is capable of extreme generalization, quickly adapting to radically novel situations and planning for very long-term future situations.

Take-aways

Here's what you should remember: the only real success of deep learning so far has been the ability to map space X to space Y using a continuous geometric transform, given large amounts of human-annotated data. Doing this well is a game-changer for essentially every industry, but it is still a very long way from human-level AI.
To lift some of these limitations and start competing with human brains, we need to move away from straightforward input-to-output mappings, and on to reasoning and abstraction. A likely appropriate substrate for abstract modeling of various situations and concepts is that of computer programs. We have said before (Note: in Deep Learning with Python) that machine learning models could be defined as "learnable programs"; currently we can only learn programs that belong to a very narrow and specific subset of all possible programs.

Wednesday, July 12, 2017

How To Become A Better Programmer


Becoming better at nearly any craft normally involves a lot of both study and practice. A pianist will always both study music and practice musicality. A chess player will both study the game of chess and actively practice it as well. For software engineers, the logic remains the same. We should always be both practicing and studying the most effective programming paradigms. So this leads us to the question: "How do we both study and deliberately practice?" Hopefully this post will help us uncover some of the mysteries involved, as they relate to programming.



If you're reading this post, I'm sure you're already familiar with how to study. So let's ask ourselves: what is deliberate practice? One could define deliberate practice as actively and accurately engaging in a given craft by doing. Once a musician studies a specific genre, he/she can then physically practice that area of musicality, and once a chess player becomes familiar with the game of chess, he/she can then practice it. Once again, the same logic applies to programmers.

I've found one of the best ways to deliberately practice as a programmer is by both studying and then doing. One could dig a lot deeper into both of these areas, but let's keep it concise for time's sake. Consider arrays, case/switch, and the binary search algorithm, for example. Without reading over arrays (studying), it would be hard for a person to know when and where to use them. One could easily be creating a ton of extra variables when a simple array would suffice, but if that person never studied arrays first, it would be difficult to even know where to start. If that same person only used if statements and never studied case/switch, he/she could run into a situation that strongly suggests case/switch versus 150 back-to-back if statements. Or, if one never learned the binary search algorithm, he/she could face a situation that requires searching through huge amounts of numerical data and end up checking one number at a time until the target is found. While the naive approach in each of these examples could work, it lacks efficiency in most cases.
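To make that last point concrete, here is a short sketch contrasting the one-number-at-a-time scan with binary search on a sorted list; the function names are mine, purely for illustration.

```python
def linear_search(sorted_numbers, target):
    """Check one number at a time -- O(n) comparisons."""
    for index, value in enumerate(sorted_numbers):
        if value == target:
            return index
    return -1

def binary_search(sorted_numbers, target):
    """Halve the search range at every step -- O(log n) comparisons."""
    low, high = 0, len(sorted_numbers) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_numbers[mid] == target:
            return mid
        elif sorted_numbers[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

# On a list of a million sorted numbers, binary search needs about 20 comparisons
# where the linear scan may need up to a million.
```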

Let's also be careful about the advice that we receive when asking, "How do I become a better programmer?" Everyone's situation is different, and what works for someone else may not work for you given your situation. The response I've heard people give most of the time is, "To get better at programming, just program." This could not be more wrong. I would not tell a pianist to just play the piano, or tell a chess player to just play chess. I also would not tell a boxer to just get in the ring and box. In the boxer's case, that bad advice would only get him hurt. He needs to learn and practice his jab, his right and left hooks, his uppercuts, stamina, etc. He can't just get in the ring and box without first learning and actively practicing boxing, outside of a direct competing event. Could one not "just program" and do an inefficient job of it? Of course he/she can. Our earlier example about searching one number at a time would be the perfect case. If that person asked how to get better at programming and we replied, "just program", how would this help him in any way? It probably would not help much.

My more specific advice on this matter would be to first ask that person what it is that he/she specifically does or wants to do as a programmer, and then tailor my answer to that person. My answer, in the context of this post, would be to study by reading well-regarded software engineering books and doing every quiz and project until you have a solid understanding of the programming methodologies taught in each book. I would then say to apply that logic to whatever study method they are using: books, online tutorials, videos, etc. Doing this would allow them to first study (learn it), and then do it, by working through the practice questions. Afterwards, one could practice a lot further by building personal projects. Some might prefer to build personal projects first; it's more a matter of preference.