Precose Diabetes Type 2 Treatment - Precose Patient Information

Brand names: Precose
Generic name: Acarbose

Precose, acarbose, full prescribing information

What is Precose and why is Precose prescribed?

Precose is an oral medication used to treat type 2 (noninsulin-dependent) diabetes when high blood sugar levels cannot be controlled by diet alone. Precose works by slowing the body's digestion of carbohydrates so that blood sugar levels won't surge upward after a meal. Precose may be taken alone or in combination with certain other diabetes medications.

Most important fact about Precose

Always remember that Precose is an aid to, not a substitute for, good diet and exercise. Failure to follow the diet and exercise plan recommended by your doctor can lead to serious complications such as dangerously high or low blood sugar levels. If you are overweight, losing pounds and exercising are critically important in controlling your diabetes. Remember, too, that Precose is not an oral form of insulin and cannot be used in place of insulin.

How should you take Precose?

Do not take more or less of Precose than directed by your doctor. Precose is usually taken 3 times a day with the first bite of each main meal.

  • If you miss a dose...
    Take it as soon as you remember. If it is almost time for your next dose, skip the one you missed and go back to your regular schedule. Never take 2 doses at the same time. Taking Precose with your 3 main meals will help you to remember your medication schedule.
  • Storage instructions...
    Keep the container tightly closed. Protect from temperatures above 77°F. Store away from moisture.

What side effects may occur?

Side effects cannot be anticipated. If any develop or change in intensity, tell your doctor as soon as possible. Only your doctor can determine if it is safe for you to continue taking Precose.

If side effects do occur, they usually appear during the first few weeks of therapy and generally become less intense and less frequent over time. They are rarely severe.

  • More common side effects may include:
    Abdominal pain, diarrhea, gas

Why should Precose not be prescribed?

Do not take Precose if you are suffering from diabetic ketoacidosis (a life-threatening medical emergency caused by insufficient insulin and marked by mental confusion, excessive thirst, nausea, vomiting, headache, fatigue, and a sweet fruity smell to the breath).

You should not take Precose if you have cirrhosis (chronic degenerative liver disease). Also avoid Precose therapy if you have inflammatory bowel disease, ulcers in the colon, any intestinal obstruction, chronic intestinal disease associated with disorders of digestion or absorption, or any condition that could become worse as a result of gas in the intestine.

Special warnings about Precose

Every 3 months during your first year of treatment, your doctor will give you a blood test to check your liver and see how it is reacting to Precose. While you are taking Precose, you should check your blood and urine periodically for the presence of abnormal sugar (glucose) levels.

Even people with well-controlled diabetes may find that stress such as injury, infection, surgery, or fever results in a loss of control over their blood sugar. If this happens to you, your doctor may recommend that Precose be discontinued temporarily and injected insulin used instead.

When taken alone, Precose does not cause hypoglycemia (low blood sugar), but when you take it in combination with other medications such as Diabinese or Glucotrol, or with insulin, your blood sugar may fall too low. If you have any questions about combining Precose with other medications, be sure to discuss them with your doctor.

If you are taking Precose along with other diabetes medications, be sure to have some source of glucose available in case you experience any symptoms of mild or moderate low blood sugar. (Table sugar won't work because Precose inhibits its absorption.)

  • Symptoms of mild hypoglycemia may include:
    Cold sweat, fast heartbeat, fatigue, headache, nausea, and nervousness
  • Symptoms of more severe hypoglycemia may include:
    Coma, pale skin, and shallow breathing

Severe hypoglycemia is an emergency. Contact your doctor immediately if the symptoms occur.

Possible food and drug interactions when taking Precose

When you take Precose with certain other drugs, the effects of either could be increased, decreased, or altered. It is especially important to check with your doctor before taking Precose with the following:

  • Airway-opening drugs
  • Calcium channel blockers (heart and blood pressure medications)
  • Charcoal tablets
  • Digestive enzyme preparations
  • Digoxin
  • Estrogens
  • Isoniazid
  • Major tranquilizers
  • Nicotinic acid
  • Oral contraceptives
  • Phenytoin
  • Steroid medications
  • Thyroid medications
  • Water pills (diuretics)

Special information if you are pregnant or breastfeeding

The effects of Precose during pregnancy have not been adequately studied. If you are pregnant or plan to become pregnant, tell your doctor immediately. Since studies suggest the importance of maintaining normal blood sugar levels during pregnancy, your doctor may prescribe injected insulin. It is not known whether Precose appears in breast milk. Because many drugs do appear in breast milk, you should not take Precose while breastfeeding.

Recommended dosage for Precose

ADULTS

The recommended starting dose of Precose is 25 milligrams (half of a 50-milligram tablet) 3 times a day, taken with the first bite of each main meal. Some people need to work up to this dose gradually and start with 25 milligrams only once a day. Your doctor will adjust your dosage at 4- to 8-week intervals, based on blood tests and your individual response to Precose. The doctor may increase the medication to 50 milligrams 3 times a day or, if needed, 100 milligrams 3 times a day. You should not take more than this amount. If you weigh less than 132 pounds, the maximum dosage is 50 milligrams 3 times a day.

If you are also taking another oral antidiabetic medication or insulin and you show signs of low blood sugar, your doctor will adjust the dosage of both medications.

CHILDREN

Safety and effectiveness of Precose in children have not been established.

Overdosage

An overdose of Precose alone will not cause low blood sugar. It may, however, cause a temporary increase in gas, diarrhea, and abdominal discomfort. These symptoms usually disappear quickly. In the event of an overdose, do not take any carbohydrate drinks or meals until the symptoms have passed.

last updated 01/2008

APA Reference
Staff, H. (2008, January 31). Precose Diabetes Type 2 Treatment - Precose Patient Information, HealthyPlace. Retrieved on 2024, October 2 from https://www.healthyplace.com/diabetes/medications/precose-type-2-diabetes-treatment

Last Updated: March 10, 2016

Levemir Diabetes Treatment - Levemir Patient Information

Brand Name: Levemir
Generic Name: Insulin Detemir

Pronounced: IN-su-lin-DE-te-mir

Levemir, insulin detemir, full prescribing information 

What is Levemir and what is it used for?

Levemir is a man-made form of a hormone that is produced in the body. It works by lowering levels of glucose (sugar) in the blood. It is a long-acting form of insulin that is slightly different from other forms of insulin that are not man-made.

Levemir is used to treat diabetes in adults and children.

Levemir may also be used for other purposes not listed in this medication guide.

Important information about Levemir

Many other drugs can potentially interfere with the effects of Levemir. It is extremely important that you tell your doctor about all the prescription and over-the-counter medications you use. This includes vitamins, minerals, herbal products, and drugs prescribed by other doctors. Do not start using a new medication without telling your doctor.

Levemir is only part of a complete program of treatment that may also include diet, exercise, weight control, foot care, eye care, dental care, overall proper health care, and testing your blood sugar. Follow your diet, medication, and exercise routines very closely. Changing any of these factors can affect your blood sugar levels.

Take care to keep your blood sugar from getting too low, causing hypoglycemia. Symptoms of low blood sugar may include headache, nausea, hunger, confusion, drowsiness, weakness, dizziness, blurred vision, fast heartbeat, sweating, tremor, or trouble concentrating. Carry a piece of non-dietetic hard candy or glucose tablets with you in case you have low blood sugar. Also be sure your family and close friends know how to help you in an emergency.

Also watch for signs of blood sugar that is too high (hyperglycemia). These symptoms include increased thirst, loss of appetite, fruity breath odor, increased urination, nausea, vomiting, drowsiness, dry skin, and dry mouth. Check your blood sugar levels and ask your doctor how to adjust your insulin doses if needed.

Never share an injection pen or cartridge with another person. Sharing injection pens or cartridges can allow diseases such as hepatitis or HIV to pass from one person to another.

Before using Levemir

You should not use Levemir if you are allergic to insulin, or if you are having an episode of hypoglycemia (low blood sugar). Before using Levemir, tell your doctor if you have kidney or liver disease, or any disorder of your thyroid, adrenal, or pituitary glands.

Tell your doctor about all other medications you use, including any oral (taken by mouth) diabetes medications.

Levemir is only part of a complete program of treatment that may also include diet, exercise, weight control, foot care, eye care, dental care, and testing your blood sugar. Follow your diet, medication, and exercise routines very closely. Changing any of these factors can affect your blood sugar levels.

Your doctor will need to check your progress on a regular basis. Do not miss any scheduled appointments.

FDA pregnancy category C. It is not known whether Levemir is harmful to an unborn baby. Before using Levemir, tell your doctor if you are pregnant or plan to become pregnant during treatment. It is not known whether insulin detemir passes into breast milk or if it could harm a nursing baby. Do not use Levemir without telling your doctor if you are breast-feeding a baby.

How should I use Levemir?

Use Levemir exactly as it was prescribed for you. Do not use it in larger amounts or for longer than recommended by your doctor. Follow the directions on your prescription label.

Do not mix or dilute Levemir with any other insulin, or use it with an insulin pump.

Levemir is given as an injection (shot) under your skin. Your doctor, nurse, or pharmacist will give you specific instructions on how and where to inject Levemir. Do not self-inject this medicine if you do not fully understand how to give the injection and properly dispose of used needles and syringes.

If you use Levemir once daily, use the injection at your evening meal or at bedtime. If you use Levemir twice daily, use your evening dose at least 12 hours after your morning dose.

Levemir should be thin, clear, and colorless. Do not use the medication if it looks cloudy, has changed colors, or has any particles in it. Call your doctor for a new prescription.

Choose a different place in your injection skin area each time you use Levemir. Do not inject into the same place two times in a row.

If you use an injection pen, attach a new needle to the pen each time you use it. Throw away only the needle in a puncture-proof container. You may continue using the pen for up to 42 days.

Needles may not be included with the injection pen. Ask your doctor or pharmacist which brand and type of needle to use with the pen.

Use each disposable needle only one time. Throw away used needles in a puncture-proof container. If your Levemir does not come with such a container, ask your pharmacist where you can get one. Keep this container out of the reach of children and pets. Your pharmacist can tell you how to properly dispose of the container.

Some insulin needles can be used more than once, depending on needle brand and type. But a reused needle must be properly cleaned, recapped, and inspected for bending or breakage. Reusing needles also increases your risk of infection. Ask your doctor or pharmacist whether you are able to reuse your insulin needles.

Never share an injection pen or cartridge with another person. Sharing injection pens or cartridges can allow diseases such as hepatitis or HIV to pass from one person to another.

Check your blood sugar carefully during a time of stress or illness, if you travel, exercise more than usual, or skip meals. These things can affect your glucose levels and your insulin dose needs may also change.

Watch for signs of blood sugar that is too high (hyperglycemia).

These symptoms include increased thirst, loss of appetite, fruity breath odor, increased urination, nausea, vomiting, drowsiness, dry skin, and dry mouth. Check your blood sugar levels and ask your doctor how to adjust your insulin doses if needed.

Ask your doctor how to adjust your Levemir dose if needed. Do not change your dose without first talking to your doctor. Carry an ID card or wear a medical alert bracelet stating that you have diabetes, in case of emergency. Any doctor, dentist, or emergency medical care provider who treats you should know that you are diabetic.

Storing unopened vials, cartridges, or injection pens: Keep in the carton and store in a refrigerator, protected from light. Throw away any insulin not used before the expiration date on the medicine label. Unopened vials, cartridges, or injection pens may also be stored at room temperature for up to 42 days, away from heat and bright light. Throw away any insulin not used within 42 days. Storing after your first use: Keep the "in-use" vials, cartridges, or injection pens at room temperature and use within 42 days. Do not refrigerate.

Do not freeze Levemir, and throw away the medication if it has become frozen.

What happens if I miss a dose?

Follow your doctor's directions if you miss a dose of insulin.

It is important to keep Levemir on hand at all times. Get your prescription refilled before you run out of medicine completely.

What happens if I overdose?

Seek emergency medical attention if you think you have used too much of this medicine. An insulin overdose can cause life-threatening hypoglycemia.

Symptoms of severe hypoglycemia include extreme weakness, blurred vision, sweating, trouble speaking, tremors, stomach pain, confusion, seizure (convulsions), or coma.

What should I avoid while using Levemir?

Do not change the brand of insulin detemir or syringe you are using without first talking to your doctor or pharmacist. Avoid drinking alcohol. Your blood sugar may become dangerously low if you drink alcohol while using Levemir.

Levemir side effects

Get emergency medical help if you have any of these signs of insulin allergy: itching skin rash over the entire body, wheezing, trouble breathing, fast heart rate, sweating, or feeling like you might pass out.

Call your doctor if you have a serious side effect such as:

  • swelling in your hands or feet; or
  • low potassium (confusion, uneven heart rate, extreme thirst, increased urination, leg discomfort, muscle weakness or limp feeling).

Hypoglycemia, or low blood sugar, is the most common side effect of Levemir. Symptoms of low blood sugar may include headache, nausea, hunger, confusion, drowsiness, weakness, dizziness, blurred vision, fast heartbeat, sweating, tremor, trouble concentrating, confusion, or seizure (convulsions). Watch for signs of low blood sugar. Carry a piece of non-dietetic hard candy or glucose tablets with you in case you have low blood sugar.

Tell your doctor if you have itching, swelling, redness, or thickening of the skin where you inject Levemir.

This is not a complete list of side effects and others may occur. Call your doctor for medical advice about side effects. You may report side effects to FDA at 1-800-FDA-1088.

What other drugs will affect Levemir?

Using certain medicines can make it harder for you to tell when you have low blood sugar. Tell your doctor if you use any of the following:

  • albuterol (Proventil, Ventolin);
  • clonidine (Catapres);
  • reserpine;
  • guanethidine (Ismelin); or
  • beta-blockers such as atenolol (Tenormin), bisoprolol (Zebeta), labetalol (Normodyne, Trandate), metoprolol (Lopressor, Toprol), nadolol (Corgard), propranolol (Inderal, InnoPran), timolol (Blocadren), and others.

There are many other medicines that can increase or decrease the effects of Levemir on lowering your blood sugar. Tell your doctor about all the prescription and over-the-counter medications you use. This includes vitamins, minerals, herbal products, and drugs prescribed by other doctors. Do not start using a new medication without telling your doctor. Keep a list with you of all the medicines you use and show this list to any doctor or other healthcare provider who treats you.

Where can I get more information?

  • Your pharmacist can provide more information about Levemir.
  • Remember, keep this and all other medicines out of the reach of children, never share your medicines with others, and use Levemir only for the indication prescribed.

Last Updated 01/2008

APA Reference
Staff, H. (2008, January 31). Levemir Diabetes Treatment - Levemir Patient Information, HealthyPlace. Retrieved on 2024, October 2 from https://www.healthyplace.com/diabetes/medications/levemir-insulin-detemir-indications

Last Updated: July 21, 2014

On Achievement

If a comatose person were to earn an interest of 1 million USD annually on the sum paid to him as compensatory damages - would this be considered an achievement of his? To succeed in earning 1 million USD is universally judged to be an achievement. But to do so while comatose will almost as universally not be counted as one. It would seem that a person has to be both conscious and intelligent to have his achievements qualify.

Even these conditions, though necessary, are not sufficient. If a totally conscious (and reasonably intelligent) person were to accidentally unearth a treasure trove and thus be transformed into a multi-billionaire - his stumbling across a fortune will not qualify as an achievement. A lucky turn of events does not an achievement make. A person must be intent on achieving to have his deeds classified as achievements. Intention is a paramount criterion in the classification of events and actions, as any intensionalist philosopher will tell you.

Supposing a conscious and intelligent person has the intention to achieve a goal. He then engages in a series of absolutely random and unrelated actions, one of which yields the desired result. Will we then say that our person is an achiever?

Not at all. It is not enough to intend. One must proceed to produce a plan of action, which is directly derived from the overriding goal. Such a plan of action must be seen to be reasonable and pragmatic and leading - with great probability - to the achievement. In other words: the plan must involve a prognosis, a prediction, a forecast, which can be either verified or falsified. Attaining an achievement involves the construction of an ad-hoc mini theory. Reality has to be thoroughly surveyed, models constructed, one of them selected (on empirical or aesthetic grounds), a goal formulated, an experiment performed and a negative (failure) or positive (achievement) result obtained. Only if the prediction turns out to be correct can we speak of an achievement.

Our would-be achiever is thus burdened by a series of requirements. He must be conscious, must possess a well-formulated intention, must plan his steps towards the attainment of his goal, and must correctly predict the results of his actions.

But planning alone is not sufficient. One must carry out one's plan of action (from mere plan to actual action). An effort has to be seen to be invested (which must be commensurate with the achievement sought and with the qualities of the achiever). If a person consciously intends to obtain a university degree and constructs a plan of action, which involves bribing the professors into conferring one upon him - this will not be considered an achievement. To qualify as an achievement, a university degree entails a continuous and strenuous effort. Such an effort is commensurate with the desired result. If the person involved is gifted - less effort will be expected of him. The expected effort is modified to reflect the superior qualities of the achiever. Still, an effort, which is deemed to be inordinately or irregularly small (or big!) will annul the standing of the action as an achievement. Moreover, the effort invested must be seen to be continuous, part of an unbroken pattern, bounded and guided by a clearly defined, transparent plan of action and by a declared intention. Otherwise, the effort will be judged to be random, devoid of meaning, haphazard, arbitrary, capricious, etc. - which will erode the achievement status of the results of the actions. This, really, is the crux of the matter: the results are much less important than the coherent, directional, patterns of action. It is the pursuit that matters, the hunt more than the game and the game more than victory or gains. Serendipity cannot underlie an achievement.

These are the internal-epistemological-cognitive determinants as they are translated into action. But whether an event or action is an achievement or not also depends on the world itself, the substrate of the actions.

An achievement must bring about change. Changes occur or are reported to have occurred - as in the acquisition of knowledge or in mental therapy where we have no direct observational access to the events and we have to rely on testimonials. If they do not occur (or are not reported to have occurred) - there would be no meaning to the word achievement. In an entropic, stagnant world - no achievement is ever possible. Moreover: the mere occurrence of change is grossly inadequate. The change must be irreversible or, at least, induce irreversibility, or have irreversible effects. Consider Sisyphus: forever changing his environment (rolling that stone up the mountain slope). He is conscious, is possessed of intention, plans his actions and diligently and consistently carries them out. He is always successful at achieving his goals. Yet, his achievements are reversed by the spiteful gods. He is doomed to forever repeat his actions, thus rendering them meaningless. Meaning is linked to irreversible change, without it, it is not to be found. Sisyphean acts are meaningless and Sisyphus has no achievements to talk about.

Irreversibility is linked not only to meaning, but also to free will and to the lack of coercion or oppression. Sisyphus is not his own master. He is ruled by others. They have the power to reverse the results of his actions and, thus, to annul them altogether. If the fruits of our labour are at the mercy of others - we can never guarantee their irreversibility and, therefore, can never be sure to achieve anything. If we have no free will - we can have no real plans and intentions and if our actions are determined elsewhere - their results are not ours and nothing like achievement exists but in the form of self-delusion.

We see that to properly judge the status of our actions and of their results, we must be aware of many incidental things. The context is critical: what were the circumstances, what could have been expected, what are the measures of planning and of intention, of effort and of perseverance which would have "normally" been called for, etc. Labelling a complex of actions and results "an achievement" requires social judgement and social recognition. Take breathing: no one considers this to be an achievement unless Stephen Hawking is involved. Society judges the fact that Hawking is still (mentally and sexually) alert to be an outstanding achievement. The sentence: "an invalid is breathing" would be categorized as an achievement only by informed members of a community and subject to the rules and the ethos of said community. It has no "objective" or ontological weight.

Events and actions are classified as achievements, in other words, as a result of value judgements within given historical, psychological and cultural contexts. Judgement has to be involved: are the actions and their results negative or positive in the said contexts. Genocide, for instance, would not have qualified as an achievement in the USA - but it would have in the ranks of the SS. Perhaps to find a definition of achievement which is independent of social context would be the first achievement to be considered as such anywhere, anytime, by everyone.

APA Reference
Vaknin, S. (2008, January 13). On Achievement, HealthyPlace. Retrieved on 2024, October 2 from https://www.healthyplace.com/personality-disorders/malignant-self-love/on-achievement

Last Updated: July 4, 2018

The Murder of Oneself

Those who believe in the finality of death (i.e., that there is no after-life) - they are the ones who advocate suicide and regard it as a matter of personal choice. On the other hand, those who firmly believe in some form of existence after corporeal death - they condemn suicide and judge it to be a major sin. Yet, rationally, the situation should have been reversed: it should have been easier for someone who believed in continuity after death to terminate this phase of existence on the way to the next. Those who faced void, finality, non-existence, vanishing - should have been greatly deterred by it and should have refrained even from entertaining the idea. Either the latter do not really believe what they profess to believe - or something is wrong with rationality. One would tend to suspect the former.

Suicide is very different from self-sacrifice, avoidable martyrdom, engaging in life-risking activities, refusal to prolong one's life through medical treatment, euthanasia, overdosing and self-inflicted death that is the result of coercion. What is common to all these is the operational mode: a death caused by one's own actions. In all these behaviours, a foreknowledge of the risk of death is present coupled with its acceptance. But all else is so different that they cannot be regarded as belonging to the same class. Suicide is chiefly intended to terminate a life - the other acts are aimed at perpetuating, strengthening and defending values.

Those who commit suicide do so because they firmly believe in the finiteness of life and in the finality of death. They prefer termination to continuation. Yet, all the others, the observers of this phenomenon, are horrified by this preference. They abhor it. This has to do with our understanding of the meaning of life.

Ultimately, life has only meanings that we attribute and ascribe to it. Such a meaning can be external (God's plan) or internal (meaning generated through arbitrary selection of a frame of reference). But, in any case, it must be actively selected, adopted and espoused. The difference is that, in the case of external meanings, we have no way to judge their validity and quality (is God's plan for us a good one or not?). We just "take them on" because they are big, all encompassing and of a good "source". A hyper-goal generated by a superstructural plan tends to lend meaning to our transient goals and structures by endowing them with the gift of eternity. Something eternal is always judged more meaningful than something temporal. If a thing of less or no value acquires value by becoming part of a thing eternal - then the meaning and value reside with the quality of being eternal - not with the thing thus endowed. It is not a question of success. Plans temporal are as successfully implemented as designs eternal. Actually, there is no meaning to the question: is this eternal plan / process / design successful? - because success is a temporal thing, linked to endeavours that have clear beginnings and ends.

This, therefore, is the first requirement: our life can become meaningful only by integrating into a thing, a process, a being eternal. In other words, continuity (the temporal image of eternity, to paraphrase a great philosopher) is of the essence. Terminating our life at will renders it meaningless. A natural termination of our life is naturally preordained. A natural death is part and parcel of the very eternal process, thing or being which lends meaning to life. To die naturally is to become part of an eternal cycle of life, death and renewal, which goes on forever. This cyclic view of life and the creation is inevitable within any thought system which incorporates a notion of eternity. Because everything is possible given an eternal amount of time - so are resurrection and reincarnation, the afterlife, hell and other beliefs adhered to by the eternal lot.

Sidgwick raised the second requirement and with certain modifications by other philosophers, it reads: to begin to appreciate values and meanings, a consciousness (intelligence) must exist. True, the value or meaning must reside in or pertain to a thing outside the consciousness / intelligence. But, even then, only conscious, intelligent people will be able to appreciate it.

We can fuse the two views: the meaning of life is the consequence of its being part of some eternal goal, plan, process, thing, or being. Whether this holds true or does not - a consciousness is called for in order to appreciate life's meaning. Life is meaningless in the absence of consciousness or intelligence. Suicide flies in the face of both requirements: it is a clear and present demonstration of the transience of life (the negation of the NATURAL eternal cycles or processes). It also eliminates the consciousness and intelligence that could have judged life to have been meaningful had it survived. Actually, this very consciousness / intelligence decides, in the case of suicide, that life has no meaning whatsoever. To a very large extent, the meaning of life is perceived to be a collective matter of conformity. Suicide is a statement, writ in blood, that the community is wrong, that life is meaningless and final (otherwise, the suicide would not have been committed).

This is where life ends and social judgement commences. Society cannot admit that it is against freedom of expression (suicide is, after all, a statement). It never could. It always preferred to cast the suicides in the role of criminals (and, therefore, bereft of any or many civil rights). According to still prevailing views, the suicide violates unwritten contracts with himself, with others (society) and, many might add, with God (or with Nature with a capital N). Thomas Aquinas said that suicide was not only unnatural (organisms strive to survive, not to self annihilate) - but it also adversely affects the community and violates God's property rights. The latter argument is interesting: God is supposed to own the soul and it is a gift (in Jewish writings, a deposit) to the individual. A suicide, therefore, has to do with the abuse or misuse of God's possessions, temporarily lodged in a corporeal mansion.


This implies that suicide affects the eternal, immutable soul. Aquinas refrains from elaborating exactly how a distinctly physical and material act alters the structure and / or the properties of something as ethereal as the soul. Hundreds of years later, Blackstone, the codifier of British Law, concurred. The state, according to this juridical mind, has a right to prevent and to punish for suicide and for attempted suicide. Suicide is self-murder, he wrote, and, therefore, a grave felony. In certain countries, this still is the case. In Israel, for instance, a soldier is considered to be "army property" and any attempted suicide is severely punished as being "attempt at corrupting army possessions". Indeed, this is paternalism at its worst, the kind that objectifies its subjects. People are treated as possessions in this malignant mutation of benevolence. Such paternalism acts against adults expressing fully informed consent. It is an explicit threat to autonomy, freedom and privacy. Rational, fully competent adults should be spared this form of state intervention. It served as a magnificent tool for the suppression of dissidence in places like Soviet Russia and Nazi Germany. Mostly, it tends to breed "victimless crimes". Gamblers, homosexuals, communists, suicides - the list is long. All have been "protected from themselves" by Big Brothers in disguise. Wherever humans possess a right - there is a correlative obligation not to act in a way that will prevent the exercise of such right, whether actively (preventing it), or passively (reporting it). In many cases, not only is suicide consented to by a competent adult (in full possession of his faculties) - it also increases utility both for the individual involved and for society. The only exception is, of course, where minors or incompetent adults (the mentally retarded, the mentally insane, etc.) are involved. Then a paternalistic obligation seems to exist. 
I use the cautious term "seems" because life is such a basic and deep-set phenomenon that, in my view, even the incompetent can fully gauge its significance and make "informed" decisions about it. In any case, no one is better able to evaluate the quality of life (and the ensuing justifications of a suicide) of a mentally incompetent person than that person himself.

The paternalists claim that no competent adult will ever decide to commit suicide. No one "in his right mind" will elect this option. This contention is, of course, obliterated both by history and by psychology. But a derivative argument seems more forceful. Some people whose suicides were prevented felt very happy that they were. They felt elated to have the gift of life back. Isn't this a sufficient reason to intervene? Absolutely not. All of us are engaged in making irreversible decisions. For some of these decisions, we are likely to pay very dearly. Is this a reason to stop us from making them? Should the state be allowed to prevent a couple from marrying because of genetic incompatibility? Should an overpopulated country institute forced abortions? Should smoking be banned for the higher-risk groups? The answers seem clear and negative. There is a double moral standard when it comes to suicide. People are permitted to destroy their lives only in certain prescribed ways.

And if the very notion of suicide is immoral, even criminal - why stop at individuals? Why not apply the same prohibition to political organizations (such as the Yugoslav Federation or the USSR or East Germany or Czechoslovakia, to mention four recent examples)? To groups of people? To institutions, corporations, funds, not-for-profit organizations, international organizations and so on? This quickly deteriorates into the land of absurdities, long inhabited by the opponents of suicide.


APA Reference
Vaknin, S. (2008, January 13). The Murder of Oneself. HealthyPlace. Retrieved October 2, 2024, from https://www.healthyplace.com/personality-disorders/malignant-self-love/murder-of-oneself

Last Updated: July 4, 2018

The Manifold of Sense

"Anthropologists report enormous differences in the ways that different cultures categorize emotions. Some languages, in fact, do not even have a word for emotion. Other languages differ in the number of words they have to name emotions. While English has over 2,000 words to describe emotional categories, there are only 750 such descriptive words in Taiwanese Chinese. One tribal language has only 7 words that could be translated into categories of emotion... the words used to name or describe an emotion can influence what emotion is experienced. For example, Tahitians do not have a word directly equivalent to sadness. Instead, they treat sadness as something like a physical illness. This difference has an impact on how the emotion is experienced by Tahitians. For example, the sadness we feel over the departure of a close friend would be experienced by a Tahitian as exhaustion. Some cultures lack words for anxiety or depression or guilt. Samoans have one word encompassing love, sympathy, pity, and liking - which are very different emotions in our own culture."

"Psychology - An Introduction" Ninth Edition By: Charles G. Morris, University of Michigan Prentice Hall, 1996

Introduction

This essay is divided into two parts. In the first, we survey the landscape of the discourse regarding emotions in general and sensations in particular. This part will be familiar to any student of philosophy and may be skipped by such readers. The second part contains an attempt at producing an integrative overview of the matter; whether it is successful or not is best left to the reader to judge.

A. Survey

Words have the power to express the speaker's emotions and to evoke emotions (whether the same or not remains disputed) in the listener. Words, therefore, possess emotive meaning together with their descriptive meaning (the latter plays a cognitive role in forming beliefs and understanding).

Our moral judgements and the responses deriving therefrom have a strong emotional streak, an emotional aspect and an emotive element. Whether the emotive part predominates as the basis of appraisal is again debatable. Reason analyzes a situation and prescribes alternatives for action. But it is considered to be static, inert, not goal-oriented (one is almost tempted to say: non-teleological). The equally necessary dynamic, action-inducing component is thought, for some obscure reason, to belong to the emotional realm. Thus, the language (=words) used to express moral judgement supposedly expresses the speaker's emotions. Through the aforementioned mechanism of emotive meaning, similar emotions are evoked in the hearer and he is moved to action.

A distinction should be - and has been - drawn between regarding moral judgement as merely a report pertaining to the subject's inner emotional world - and regarding it wholly as an emotive reaction. In the first case, the whole notion (really, the phenomenon) of moral disagreement is rendered incomprehensible. How could one disagree with a report? In the second case, moral judgement is reduced to the status of an exclamation, a non-propositional expression of "emotive tension", a mental excretion. This absurdity was nicknamed "The Boo-Hoorah Theory".

There were those who maintained that the whole issue was the result of mislabeling. Emotions are really what we otherwise call attitudes, they claimed. We approve or disapprove of something, therefore, we "feel". Prescriptivist accounts displaced emotivist analyses. This instrumentalism did not prove more helpful than its purist predecessors.

Throughout this scholarly debate, philosophers did what they are best at: ignoring reality. Moral judgements - every child knows - are not explosive or implosive events, with shattered and scattered emotions strewn all over the battlefield. Logic is definitely involved, and so are responses to already analyzed moral properties and circumstances. Moreover, emotions themselves are judged morally (as right or wrong). If a moral judgement were really an emotion, we would need to stipulate the existence of a hyper-emotion to account for the moral judgement of our emotions and, in all likelihood, would find ourselves regressing infinitely. If moral judgement is a report or an exclamation, how are we able to distinguish it from mere rhetoric? How are we able to intelligibly account for the formation of moral standpoints by moral agents in response to an unprecedented moral challenge?

Moral realists criticize these largely superfluous and artificial dichotomies (reason versus feeling, belief versus desire, emotivism and noncognitivism versus realism).

The debate has old roots. Feeling Theories, such as Descartes', regarded emotions as a mental item, which requires no definition or classification. One could not fail to fully grasp it upon having it. This entailed the introduction of introspection as the only way to access our feelings. Introspection not in the limited sense of "awareness of one's mental states" but in the broader sense of "being able to internally ascertain mental states". It almost became material: a "mental eye", a "brain-scan", at the least a kind of perception. Others denied its similarity to sensual perception. They preferred to treat introspection as a modus of memory, recollection through retrospection, as an internal way of ascertaining (past) mental events. This approach relied on the impossibility of having a thought simultaneously with another thought whose subject was the first thought. All these lexicographic storms did not serve either to elucidate the complex issue of introspection or to solve the critical questions: How can we be sure that what we "introspect" is not false? If accessible only to introspection, how do we learn to speak of emotions uniformly? How do we (unreflectively) assume knowledge of other people's emotions? How come we are sometimes forced to "unearth" or deduce our own emotions? How is it possible to mistake our emotions (to have one without actually feeling it)? Are all these failures of the machinery of introspection?

The proto-psychologists James and Lange (separately) proposed that emotions are the experiencing of physical responses to external stimuli. They are mental representations of totally corporeal reactions. Sadness is what we call the feeling of crying. This was phenomenological materialism at its worst. To have full-blown emotions (not merely detached observations), one needed to experience palpable bodily symptoms. The James-Lange Theory thus implies that a quadriplegic can have no emotions, since he experiences no bodily sensations. Sensationalism, another form of fanatic empiricism, stated that all our knowledge derived from sensations or sense data. There is no clear answer to the question of how these sensa (=sense data) get coupled with interpretations or judgements. Kant postulated the existence of a "manifold of sense" - the data supplied to the mind through sensation. In the "Critique of Pure Reason" he claimed that these data were presented to the mind in accordance with its already preconceived forms (sensibilities, like space and time). But to experience means to unify these data, to cohere them somehow. Even Kant admitted that this is brought about by the synthetic activity of "imagination", as guided by "understanding". Not only was this a deviation from materialism (what material is "imagination" made of?) - it was also not very instructive.

The problem was partly a problem of communication. Emotions are qualia, qualities as they appear to our consciousness. In many respects they are like sense data (which brought about the aforementioned confusion). But, as opposed to sensa, which are particular, qualia are universal. They are subjective qualities of our conscious experience. It is impossible to ascertain or to analyze the subjective components of phenomena in physical, objective terms, communicable and understandable by all rational individuals, independent of their sensory equipment. The subjective dimension is comprehensible only to conscious beings of a certain type (=with the right sensory faculties). The problems of "absent qualia" (can a zombie/a machine pass for a human being despite the fact that it has no experiences) and of "inverted qualia" (what we both call "red" might have been called "green" by you if you had my internal experience when seeing what we call "red") - are irrelevant to this more limited discussion. These problems belong to the realm of "private language". Wittgenstein demonstrated that a language cannot contain elements which it would be logically impossible for anyone but its speaker to learn or understand. Therefore, it cannot have elements (words) whose meaning is the result of representing objects accessible only to the speaker (for instance, his emotions). One can use a language either correctly or incorrectly. The speaker must have at his disposal a decision procedure, which will allow him to decide whether his usage is correct or not. This is not possible with a private language, because it cannot be compared to anything.

In any case, the bodily upset theories propagated by James et al. did not account for lasting or dispositional emotions, where no external stimulus occurred or persisted. They could not explain on what grounds we judge emotions as appropriate or perverse, justified or not, rational or irrational, realistic or fantastic. If emotions were nothing but involuntary reactions, contingent upon external events, devoid of context - then why do we perceive drug-induced anxiety, or intestinal spasms, in a detached way, not as we do emotions? Putting the emphasis on sorts of behaviour (as the behaviourists do) shifts the focus to the public, shared aspect of emotions but miserably fails to account for their private, pronounced, dimension. It is possible, after all, to experience emotions without expressing them (=without behaving). Additionally, the repertory of emotions available to us is much larger than the repertory of behaviours. Emotions are subtler than actions and cannot be fully conveyed by them. We find even human language an inadequate conduit for these complex phenomena.

To say that emotions are cognitions is to say nothing. We understand cognition even less than we understand emotions (with the exception of the mechanics of cognition). To say that emotions are caused by cognitions or cause cognitions (emotivism) or are part of a motivational process - does not answer the question: "What are emotions?". Emotions do cause us to apprehend and perceive things in a certain way and even to act accordingly. But WHAT are emotions? Granted, there are strong, perhaps necessary, connections between emotions and knowledge and, in this respect, emotions are ways of perceiving the world and interacting with it. Perhaps emotions are even rational strategies of adaptation and survival and not stochastic, isolated inter-psychic events. Perhaps Plato was wrong in saying that emotions conflict with reason and thus obscure the right way of apprehending reality. Perhaps he is right: fears do become phobias, emotions do depend on one's experience and character. As we have it in psychoanalysis, emotions may be reactions to the unconscious rather than to the world. Yet, again, Sartre may be right in saying that emotions are a "modus vivendi", the way we "live" the world, our perceptions coupled with our bodily reactions. He wrote: "(we live the world) as though the relations between things were governed not by deterministic processes but by magic". Even a rationally grounded emotion (fear which generates flight from a source of danger) is really a magical transformation (the ersatz elimination of that source). Emotions sometimes mislead. People may perceive the same, analyze the same, evaluate the situation the same, respond along the same vein - and yet have different emotional reactions. It does not seem necessary (even if it were sufficient) to postulate the existence of "preferred" cognitions - those that enjoy an "overcoat" of emotions. Either all cognitions generate emotions, or none does. But, again, WHAT are emotions?

We all possess some kind of sense awareness, a perception of objects and states of things by sensual means. Even a dumb, deaf and blind person still possesses proprioception (perceiving the position and motion of one's limbs). Sense awareness does not include introspection because the subject of introspection is supposed to be mental, unreal, states. Still, if mental states are a misnomer and really we are dealing with internal, physiological, states, then introspection should form an important part of sense awareness. Specialized organs mediate the impact of external objects upon our senses and distinctive types of experience arise as a result of this mediation.

Perception is thought to comprise a sensory phase - its subjective aspect - and a conceptual phase. Clearly sensations come before thoughts or beliefs are formed. Suffice it to observe children and animals to be convinced that a sentient being does not necessarily have to have beliefs. One can employ the sense modalities, or even have sensory-like phenomena (hunger, thirst, pain, sexual arousal), and, in parallel, engage in introspection, because all these have an introspective dimension. It is inevitable: sensations are about how objects feel, sound, smell and look to us. The sensations "belong", in one sense, to the objects with which they are identified. But in a deeper, more fundamental sense, they have intrinsic, introspective qualities. This is how we are able to tell them apart. The difference between sensations and propositional attitudes is thus made very clear. Thoughts, beliefs, judgements and knowledge differ only with respect to their content (the proposition believed/judged/known, etc.) and not in their intrinsic quality or feel. Sensations are exactly the opposite: differently felt sensations may relate to the same content. Thoughts can also be classified in terms of intentionality (they are "about" something) - sensations only in terms of their intrinsic character. They are, therefore, distinct from discursive events (such as reasoning, knowing, thinking, or remembering) and do not depend upon the subject's intellectual endowments (like his power to conceptualize). In this sense, they are mentally "primitive" and probably take place at a level of the psyche where reason and thought have no recourse.

The epistemological status of sensations is much less clear. When we see an object, are we aware of a "visual sensation" in addition to being aware of the object? Perhaps we are only aware of the sensation, wherefrom we infer the existence of an object, or otherwise construct it mentally, indirectly? This, the Representative Theory tries to persuade us, is what the brain does upon encountering the visual stimuli emanating from a real, external object. The Naive Realists say that one is only aware of the external object and that it is the sensation that we infer. This is a less tenable theory because it fails to explain how we directly know the character of the pertinent sensation.

What is indisputable is that sensation is either an experience or a faculty of having experiences. In the first case, we have to introduce the idea of sense data (the objects of the experience) as distinct from the sensation (the experience itself). But isn't this separation artificial at best? Can sense data exist without sensation? Is "sensation" a mere structure of the language, an internal accusative? Is "to have a sensation" equivalent to "to strike a blow" (as some dictionaries of philosophy have it)? Moreover, sensations must be had by subjects. Are sensations objects? Are they properties of the subjects that have them? Must they intrude upon the subject's consciousness in order to exist - or can they exist in the "psychic background" (for instance, when the subject is distracted)? Are they mere representations of real events (is pain a representation of injury)? Are they located? We know of sensations when no external object can be correlated with them or when we deal with the obscure, the diffuse, or the general. Some sensations relate to specific instances - others to kinds of experiences. So, in theory, the same sensation can be experienced by several people. It would be the same KIND of experience - though, of course, different instances of it. Finally, there are the "oddball" sensations, which are neither entirely bodily - nor entirely mental. The sensations of being watched or followed are two examples of sensations with both components clearly intertwined.

Feeling is a "hyper-concept" which is made of both sensation and emotion. It describes the ways in which we experience both our world and our selves. It coincides with sensations whenever it has a bodily component. But it is sufficiently flexible to cover emotions and attitudes or opinions. But attaching names to phenomena never helped in the long run and in the really important matter of understanding them. To identify feelings, let alone to describe them, is not an easy task. It is difficult to distinguish among feelings without resorting to a detailed description of causes, inclinations and dispositions. In addition, the relationship between feeling and emotions is far from clear or well established. Can we emote without feeling? Can we explain emotions, consciousness, even simple pleasure in terms of feeling? Is feeling a practical method, can it be used to learn about the world, or about other people? How do we know about our own feelings?

Instead of throwing light on the subject, the dual concepts of feeling and sensation seem to confound matters even further. A more basic level needs to be broached, that of sense data (or sensa, as in this text).

Sense data are entities cyclically defined. Their existence depends upon being sensed by a sensor equipped with senses. Yet, they define the senses to a large extent (imagine trying to define the sense of vision without visuals). Ostensibly, they are entities, though subjective. Allegedly, they possess the properties that we perceive in an external object (if it is there), as it appears to have them. In other words, though the external object is perceived, what we really get in touch with directly, what we apprehend without mediation - are the subjective sensa. What is (probably) perceived is merely inferred from the sense data. In short, all our empirical knowledge rests upon our acquaintance with sensa. Every perception has as its basis pure experience. But the same can be said about memory, imagination, dreams, hallucinations. Sensation, as opposed to these, is supposed to be error free, not subject to filtering or to interpretation, special, infallible, direct and immediate. It is an awareness of the existence of entities: objects, ideas, impressions, perceptions, even other sensations. Russell and Moore said that sense data have all (and only) the properties that they appear to have and can only be sensed by one subject. But these all are idealistic renditions of senses, sensations and sensa. In practice, it is notoriously difficult to reach a consensus regarding the description of sense data or to base any meaningful (let alone useful) knowledge of the physical world on them. There is a great variance in the conception of sensa. Berkeley, ever the incorrigible practical Briton, said that sense data exist only if and when sensed or perceived by us. Nay, their very existence IS their being perceived or sensed by us. Some sensa are public or part of lager assemblages of sensa. Their interaction with the other sensa, parts of objects, or surfaces of objects may distort the inventory of their properties. 
They may seem to lack properties that they do possess or to possess properties that can be discovered only upon close inspection (not immediately evident). Some sense data are intrinsically vague. What is a striped pajama? How many stripes does it contain? We do not know. It is sufficient to note (=to visually sense) that it has stripes all over. Some philosophers say that if sense data can possibly be sensed, then they exist. These sensa are called sensibilia (the plural of sensibile). Even when not actually perceived or sensed, objects consist of sensibilia. This makes sense data hard to differentiate. They overlap, and where one begins may be the end of another. Nor is it possible to say whether sensa are changeable, because we do not really know WHAT they are (objects, substances, entities, qualities, events?).

Other philosophers suggested that sensing is an act directed at the objects called sense data. Others hotly dispute this artificial separation. To see red is simply to see in a certain manner, that is: to see redly. This is the adverbial school. It is close to the contention that sense data are nothing but a linguistic convenience, a noun which enables us to discuss appearances. For instance, the "gray" sense data is nothing but a mixture of red and sodium. Yet we use this convention (gray) for convenience and efficacy's sake.

B. The Evidence

An important facet of emotions is that they can generate and direct behaviour. They can trigger complex chains of actions, not always beneficial to the individual. Yerkes and Dodson observed that the more complex a task is, the more emotional arousal interferes with performance. In other words, emotions can motivate. If this were their only function, we might have determined that emotions are a sub-category of motivations.

Some cultures do not have a word for emotion. Others equate emotions with physical sensations, à la James-Lange, who said that external stimuli cause bodily changes which result in emotions (or are interpreted as such by the person affected). Cannon and Bard differed only in saying that emotions and bodily responses are simultaneous. An even more far-fetched approach (the Cognitive Theories) was that situations in our environment foster in us a GENERAL state of arousal. We then receive clues from the environment as to what we should call this general state. For instance, it has been demonstrated that facial expressions can induce emotions, apart from any cognition.

A big part of the problem is that there is no accurate way to verbally communicate emotions. People are either unaware of their feelings or try to falsify their magnitude (minimize or exaggerate them). Facial expressions seem to be both inborn and universal. Children born deaf and blind use them. They must be serving some adaptive survival strategy or function. Darwin said that emotions have an evolutionary history and can be traced across cultures as part of our biological heritage. Maybe so. But the bodily vocabulary is not flexible enough to capture the full range of emotional subtleties humans are capable of. Another nonverbal mode of communication is known as body language: the way we move, the distance we maintain from others (personal or private territory). It expresses emotions, though only very crass and raw ones.

And there is overt behaviour. It is determined by culture, upbringing, personal inclination, temperament and so on. For instance: women are more likely to express emotions than men when they encounter a person in distress. Both sexes, however, experience the same level of physiological arousal in such an encounter. Men and women also label their emotions differently. What men call anger - women call hurt or sadness. Men are four times more likely than women to resort to violence. Women more often than not will internalize aggression and become depressed.

Efforts at reconciling all these data were made in the early eighties. It was hypothesized that the interpretation of emotional states is a two-phase process. People respond to emotional arousal by quickly "surveying" and "appraising" (introspectively) their feelings. Then they proceed to search for environmental cues to support the results of their assessment. They will, thus, tend to pay more attention to internal cues that agree with the external ones. Put more plainly: people will feel what they expect to feel.

Several psychologists have shown that feelings precede cognition in infants. Animals also probably react before thinking. Does this mean that the affective system reacts instantaneously, without any of the appraisal and survey processes that were postulated? If this were the case, then we merely play with words: we invent explanations to label our feelings AFTER we fully experience them. Emotions, therefore, can be had without any cognitive intervention. They provoke unlearned bodily patterns, such as the aforementioned facial expressions and body language. This vocabulary of expressions and postures is not even conscious. When information about these reactions reaches the brain, it assigns to them the appropriate emotion. Thus, affect creates emotion and not vice versa.

Sometimes, we hide our emotions in order to preserve our self-image or not to incur society's wrath. Sometimes, we are not aware of our emotions and, as a result, deny or diminish them.

C. An Integrative Platform - A Proposal

(The terminology used in this chapter is explored in the previous ones.)

The use of one word to denote a whole process was the source of misunderstandings and futile disputations. Emotions (feelings) are processes, not events, or objects. Throughout this chapter, I will, therefore, use the term "Emotive Cycle".

The genesis of the Emotive Cycle lies in the acquisition of Emotional Data. In most cases, these are made up of Sense Data mixed with data related to spontaneous internal events. Even when no access to sensa is available, the stream of internally generated data is never interrupted. This is easily demonstrated in experiments involving sensory deprivation or with people who are naturally sensorily deprived (blind, deaf and dumb, for instance). The spontaneous generation of internal data and the emotional reactions to them are always there even in these extreme conditions. It is true that, even under severe sensory deprivation, the emoting person reconstructs or evokes past sensory data. A case of pure, total, and permanent sensory deprivation is nigh impossible. But there are important philosophical and psychological differences between real life sense data and their representations in the mind. Only in grave pathologies is this distinction blurred: in psychotic states, when experiencing phantom pains following the amputation of a limb or in the case of drug induced images and after images. Auditory, visual, olfactory and other hallucinations are breakdowns of normal functioning. Normally, people are well aware of and strongly maintain the difference between objective, external, sense data and the internally generated representations of past sense data.

The Emotional Data are perceived by the emoter as stimuli. The external, objective component has to be compared to internally maintained databases of previous such stimuli. The internally generated, spontaneous or associative data, have to be reflected upon. Both needs lead to introspective (inwardly directed) activity. The product of introspection is the formation of qualia. This whole process is unconscious or subconscious.

If the person is subject to functioning psychological defense mechanisms (e.g., repression, suppression, denial, projection, projective identification) - qualia formation will be followed by immediate action. The subject - not having had any conscious experience - will not be aware of any connection between his actions and preceding events (sense data, internal data and the introspective phase). He will be at a loss to explain his behaviour, because the whole process did not go through his consciousness. To further strengthen this argument, we may recall that hypnotized and anaesthetized subjects are not likely to act at all even in the presence of external, objective, sensa. Hypnotized people are likely to react to sensa introduced to their consciousness by the hypnotist and which had no existence, whether internal or external, prior to the hypnotist's suggestion. It seems that feeling, sensation and emoting exist only if they pass through consciousness. This is true even where no data of any kind are available (such as in the case of phantom pains in long amputated limbs). But such bypasses of consciousness are the less common cases.

More commonly, qualia formation will be followed by Feeling and Sensation. These will be fully conscious. They will lead to the triple processes of surveying, appraisal/evaluation and judgment formation. When repeated often enough, judgments of similar data coalesce to form attitudes and opinions. The patterns of interaction of opinions and attitudes with our thoughts (cognition) and knowledge, within our conscious and unconscious strata, give rise to what we call our personality. These patterns are relatively rigid and are rarely influenced by the outside world. When they are maladaptive and dysfunctional, we talk about personality disorders.

Judgements contain, therefore, strong emotional, cognitive and attitudinal elements, which team up to create motivation. The latter leads to action, which both completes one emotional cycle and starts another. Actions are sense data, and motivations are internal data, which together form a new chunk of emotional data.

Emotional cycles can be divided into Phrastic nuclei and Neustic clouds (to borrow a metaphor from physics). The Phrastic Nucleus is the content of the emotion, its subject matter. It incorporates the phases of introspection, feeling/sensation, and judgment formation. The Neustic cloud involves the ends of the cycle, which interface with the world: the emotional data on the one hand, and the resulting action on the other.

We started by saying that the Emotional Cycle is set in motion by Emotional Data, which, in turn, are comprised of sense data and internally generated data. But the composition of the Emotional Data is of prime importance in determining the nature of the resulting emotion and of the following action. If more sense data (than internal data) are involved and the component of internal data is weak in comparison (it is never absent) - we are likely to experience Transitive Emotions. The latter are emotions, which involve observation and revolve around objects. In short: these are "out-going" emotions, that motivate us to act to change our environment.

Yet, if the emotional cycle is set in motion by Emotional Data, which are composed mainly of internal, spontaneously generated data - we will end up with Reflexive Emotions. These are emotions that involve reflection and revolve around the self (for instance, autoerotic emotions). It is here that the source of psychopathologies should be sought: in this imbalance between external, objective, sense data and the echoes of our mind.

APA Reference
Vaknin, S. (2008, January 13). The Manifold of Sense. HealthyPlace. Retrieved October 2, 2024, from https://www.healthyplace.com/personality-disorders/malignant-self-love/manifold-of-sense

Last Updated: July 4, 2018

The Twelve Steps of Co-Dependents Anonymous: Step Eight

Made a list of all persons we had harmed, and became willing to make amends to them all.


My newfound attitude and direction in life meant that I needed to make a list of the people who had been devastated by my past attitudes and actions.

I reached as far back into my past as I could. I worked to recall all my relationships, beginning with mom and dad, brothers and sisters, grandparents, childhood friends, baby-sitters, kindergarten friends, teachers, church friends, ministers and pastors, neighborhood friends, friends of my parents—anyone with whom I had interacted in my formative years, because all these relationships held meaning and keys as to why my adult relationships were going wrong.

Of course, as I reached my teens, I developed more relationships: school friends (and enemies), schoolteachers, girlfriends, classmates, coaches, teammates, principals, etc. And family relationships changed and were redefined as I grew older: parents, grandparents, aunts, uncles, and cousins. These had to be re-examined during each phase of my life.

Then came college and marriage: teachers, students, fellow students, fraternity friends, dorm friends, serious girlfriends, mentors, unmarried friends, married friends, and my wife.

Next were in-laws, children, co-workers, employees, employers, more adult friends, older friends from the previous generation, younger friends from subsequent generations, buddies, wife's friends, wife's extended family, in-law's friends, business associates, business mentors, therapists, recovery friends, and God.

The last name I put on the list was my own.

In each of these relationships, my co-dependent behaviors had manifested in one way or another, usually through my being a know-it-all, domineering, my-way-or-the-highway type of person. I had acted out of fear-based and shame-based protectiveness. I found some manifestation of my Step Four inventory in each relationship I'd listed. I had indeed hurt others (many others) and myself.


Some of these people were dead. Some of them I had no way of finding. Some of them didn't want me to find them. I put all their names on the list anyway, because a key to working Step Eight is making the list.

I used the list to discover how I had hurt each relationship, because these were clues to myself and my codependence. These were issues I wanted to overcome. These were issues I wanted to deal with. I wanted to understand the dynamics of these relationships and get past the shame, guilt, despair and turmoil I had helped create in them.

A second key to Step Eight is that I was willing to make the amends.

I was willing to admit the mistakes I'd made. I was willing to change. I was willing to try again. I was willing to discover how to create better relationships, based on healthier premises and boundaries.

Step Eight is as much self-examination as it is relationship examination. Step Eight is about learning who I was and who I am, so that future relationships do not become further enmeshed attempts to recreate my past and deal with it yet again in unhealthy ways.

Step Eight is looking at my past, gratefully accepting it, learning from it, and choosing to create healthier relationships in the present.

APA Reference
Staff, H. (2008, January 13). The Twelve Steps of Co-Dependents Anonymous: Step Eight. HealthyPlace. Retrieved October 2, 2024, from https://www.healthyplace.com/relationships/serendipity/twelve-steps-of-co-dependents-anonymous-step-eight

Last Updated: August 7, 2014

Form and Malignant Form: The Metaphorically Correct Artist

and other Romanticist Mutations

Every type of human activity has a malignant equivalent.

The pursuit of happiness, the accumulation of wealth, the exercise of power, the love of one's self are all tools in the struggle to survive and, as such, are commendable. They do, however, have malignant counterparts: pursuing pleasures (hedonism), greed and avarice as manifested in criminal activities, murderous authoritarian regimes and narcissism.

What separates the malignant versions from the benign ones?

Phenomenologically, they are difficult to tell apart. In which way is a criminal distinct from a business tycoon? Many will say that there is no distinction. Still, society treats the two differently and has set up separate social institutions to accommodate these two human types and their activities.

Is it merely a matter of ethical or philosophical judgement? I think not.

The difference seems to lie in the context. Granted, the criminal and the businessman both have the same motivation (at times, obsession): to make money. Sometimes they both employ the same techniques and adopt the same venues of action. But in which social, moral, philosophical, ethical, historical and biographical contexts do they operate?

A closer examination of their exploits exposes the unbridgeable gap between them. The criminal acts only in the pursuit of money. He has no other considerations, thoughts, motives and emotions, no temporal horizon, no ulterior or external aims, no incorporation of other humans or social institutions in his deliberations. The reverse is true for the businessman. The latter is aware of the fact that he is part of a larger fabric, that he has to obey the law, that some things are not permissible, that sometimes he has to lose sight of moneymaking for the sake of higher values, institutions, or the future. In short: the criminal is a solipsist - the businessman, a socially integrated individual. The criminal is one-track-minded - the businessman is aware of the existence of others and of their needs and demands. The criminal has no context - the businessman does ("political animal").

Whenever a human activity, a human institution, or a human thought is refined, purified, reduced to its bare minimum - malignancy ensues. Leukaemia is characterized by the exclusive production of one category of blood cells (the white ones) by the bone marrow - while the production of others is abandoned. Malignancy is reductionist: do one thing, do it best, do it more and most, compulsively pursue one course of action, one idea, never mind the costs. Actually, no costs are admitted - because the very existence of a context is denied, or ignored. Costs are brought on by conflict, and conflict entails the existence of at least two parties. The criminal does not include the Other in his Weltbild. The dictator doesn't suffer, because suffering is brought on by recognizing the other (empathy). The malignant forms are sui generis, they are Dinge an sich, they are categorical, they do not depend on the outside for their existence.

Put differently: the malignant forms are functional but meaningless.

Let us use an illustration to understand this dichotomy:

In France there is a man who made it his life's mission to spit the furthest a human has ever spat, and this way to make it into the Guinness Book of Records (GBR). After decades of training, he succeeded in spitting the longest distance a man has ever spat and was included in the GBR under miscellany.

The following can be said about this man with a high degree of certainty:

  1. The Frenchman had a purposeful life in the sense that his life had a well-delineated, narrowly focused, and achievable target, which permeated his entire life and defined it.
  2. He was a successful man in that he fulfilled his main ambition in life to the fullest. We can rephrase this sentence by saying that he functioned well.
  3. He probably was a happy, content, and satisfied man as far as his main theme in life is concerned.
  4. He achieved significant outside recognition and affirmation of his achievements.
  5. This recognition and affirmation is not limited in time and place.

In other words, he became "part of history".

But how many of us would say that he led a meaningful life? How many would be willing to attribute meaning to his spitting efforts? Not many. His life would look to most of us ridiculous and bereft of meaning.

This judgement is facilitated by comparing his actual history with his potential or possible history. In other words, we derive the sense of meaninglessness partly from comparing his spitting career with what he could have done and achieved had he invested the same time and efforts differently.

He could have raised children, for instance. This is widely considered a more meaningful activity. But why? What makes child rearing more meaningful than distance spitting?

The answer is: common agreement. No philosopher, scientist, or publicist can rigorously establish a hierarchy of the meaningfulness of human actions.

There are two reasons for this inability:

  1. There is no connection between function (functioning, functionality) and meaning (meaninglessness, meaningfulness).
  2. There are different interpretations of the word "Meaning" and, yet, people use them interchangeably, obscuring the dialogue.

People often confuse Meaning and Function. When asked what the meaning of their life is, they respond with function-laden phrases. They say: "This activity lends taste (=one interpretation of meaning) to my life", or: "My role in this world is this and, once finished, I will be able to rest in peace, to die". They attach different magnitudes of meaningfulness to various human activities.

Two things are evident:

  1. That people use the word "Meaning" not in its philosophically rigorous form. What they mean is really the satisfaction, even the happiness that comes with successful functioning. They want to continue to live when they are flooded by these emotions. They confuse this motivation to live on with the meaning of life. Put differently, they confuse the "why" with the "what for". The philosophical assumption that life has a meaning is a teleological one. Life - regarded linearly as a "progress bar" - proceeds towards something, a final horizon, an aim. But people relate only to what "makes them tick", the pleasure that they derive from being more or less successful in what they set out to do.
  2. Either the philosophers are wrong in that they do not distinguish between human activities (from the point of view of their meaningfulness) or people are wrong in that they do. This apparent conflict can be resolved by observing that people and philosophers use different interpretations of the word "Meaning".

To reconcile these antithetical interpretations, it is best to consider three examples:

Assume there were a religious man who established a new church, of which only he was a member.

Would we have said that his life and actions are meaningful?

Probably not.

This seems to imply that quantity somehow bestows meaning. In other words, that meaning is an emergent phenomenon (epiphenomenon). Another valid conclusion would be that meaning depends on context. In the absence of worshippers, even the best-run, well-organized, and worthy church might look meaningless. The worshippers - who are part of the church - also provide the context.

This is unfamiliar territory. We are used to associating context with externality. We do not think that our organs provide us with context, for instance (unless we are afflicted by certain mental disturbances). The apparent contradiction is easily resolved: to provide context, the provider of the context must be either external - or possessed of the inherent, independent capacity to be so.

The churchgoers do constitute the church - but they are not defined by it, they are external to it and they are not dependent on it. This externality - whether as a trait of the providers of context, or as a feature of an emergent phenomenon - is all-important. The very meaning of the system is derived from it.

A few more examples to support this approach:

Imagine a national hero without a nation, an actor without an audience, and an author without (present or future) readers. Does their work have any meaning? Not really. The external perspective again proves all-important.

There is an added caveat, an added dimension here: time. To deny a work of art any meaning, we must know with total assurance that it will never be seen by anyone. Since this is an impossibility (unless it is destroyed) - a work of art has undeniable, intrinsic meaning, the result of its mere potential to be seen by someone, sometime, somewhere. This potential of a "single gaze" is sufficient to endow the work of art with meaning.

To a large extent, the heroes of history, its main characters, are actors with a stage and audience larger than usual. The only difference might be that future audiences often alter the magnitude of their "art": it is either diminished or magnified in the eyes of history.

The third example - originally brought up by Douglas Hofstadter in his magnificent opus "Gödel, Escher, Bach: An Eternal Golden Braid" - is genetic material (DNA). Without the right "context" (amino acids) - it has no "meaning" (it does not lead to the production of proteins, the building blocks of the organism encoded in the DNA). To illustrate his point, the author sends DNA on a trip to outer space, where aliens would find it impossible to decipher it (=to understand its meaning).

By now it would seem clear that for a human activity, institution or idea to be meaningful, a context is needed. Whether we can say the same about things natural remains to be seen. Being humans, we tend to assume a privileged status. As in certain metaphysical interpretations of classical quantum mechanics, the observer actively participates in the determination of the world. There would be no meaning if there were no intelligent observers - even if the requirement of context was satisfied (part of the "anthropic principle").

In other words, not all contexts were created equal. A human observer is needed to determine the meaning; this is an unavoidable constraint. Meaning is the label we give to the interaction between an entity (material or spiritual) and its context (material or spiritual). So the human observer is forced to evaluate this interaction in order to extract the meaning. But humans are not identical copies, or clones. They are liable to judge the same phenomena differently, depending upon their vantage point. They are the products of their nature and nurture, of the highly specific circumstances of their lives, and of their idiosyncrasies.

In an age of moral and ethical relativism, a universal hierarchy of contexts is not likely to go down well with the gurus of philosophy. But we are talking about the existence of hierarchies as numerous as the number of observers. This is a notion so intuitive, so embedded in human thinking and behaviour that to ignore it would amount to ignoring reality.

People (observers) have privileged systems of attribution of meaning. They constantly and consistently prefer certain contexts to others in the detection of meaning and the set of its possible interpretations. This set would have been infinite were it not for these preferences. The preferred context arbitrarily excludes and disallows certain interpretations (and, therefore, certain meanings).

The benign form is, therefore, the acceptance of a plurality of contexts and of the resulting meanings.

The malignant form is to adopt (and, then, impose) a universal hierarchy of contexts with a Master Context which bestows meaning upon everything. Such malignant systems of thought are easily recognizable because they claim to be comprehensive, invariant and universal. In plain language, these thought systems pretend to explain everything, everywhere and in a way not dependent on specific circumstances. Religion is like that and so are most modern ideologies. Science tries to be different and sometimes succeeds. But humans are frail and frightened and they much prefer malignant systems of thinking because they give them the illusion of gaining absolute power through absolute, immutable knowledge.

Two contexts seem to compete for the title of Master Context in human history, the contexts which endow all meanings, permeate all aspects of reality, are universal, invariant, define truth values and solve all moral dilemmas: the Rational and the Affective (emotions).

We live in an age that, despite its self-perception as rational, is defined and influenced by the emotional Master Context. This is called Romanticism - the malignant form of "being tuned" to one's emotions. It is a reaction to the "cult of idea" which characterized the Enlightenment (Belting, 1998).

Romanticism is the assertion that all human activities are founded on and directed by the individual and his emotions, experience, and mode of expression. As Belting (1998) notes, this gave rise to the concept of the "masterpiece" - an absolute, perfect, unique (idiosyncratic) work by an immediately recognizable and idealized artist.

This relatively novel approach (in historical terms) has permeated human activities as diverse as politics, the formation of families, and art.

Families were once constructed on purely utilitarian bases. Family formation was a transaction, really, involving considerations both financial and genetic. This was supplanted (during the 18th century) by love as the main motivation and foundation. Inevitably, this led to the disintegration and to the metamorphosis of the family. To establish a sturdy social institution on such a fickle basis was an experiment doomed to failure.

Romanticism infiltrated the body politic as well. All major political ideologies and movements of the 20th century had romanticist roots, Nazism more than most. Communism touted the ideals of equality and justice while Nazism was a quasi-mythological interpretation of history. Still, both were highly romantic movements.

Politicians were - and, to a lesser degree, still are - expected to be extraordinary in their personal lives or in their personality traits. Biographies are recast by image and public relations experts ("spin doctors") to fit this mould. Hitler was, arguably, the most romantic of all world leaders, closely followed by other dictators and authoritarian figures.

It is a cliché to say that, through politicians, we re-enact our relationships with our parents. Politicians are often perceived to be father figures. But Romanticism infantilized this transference. In politicians we want to see not the wise, level-headed, ideal father but our actual parents: capriciously unpredictable, overwhelming, powerful, unjust, protecting, and awe-inspiring. This is the romanticist view of leadership: anti-Weberian, anti-bureaucratic, chaotic. And this set of predilections, later transformed into social dictates, has had a profound effect on the history of the 20th century.

Romanticism manifested in art through the concept of Inspiration. An artist had to have it in order to create. This led to a conceptual divorce between art and artisanship.

As late as the 18th century, there was no difference between these two classes of creative people, the artists and the artisans. Artists accepted commercial orders which included thematic instructions (the subject, choice of symbols, etc.), delivery dates, prices, etc. Art was a product, almost a commodity, and was treated as such by others (examples: Michelangelo, Leonardo da Vinci, Mozart, Goya, Rembrandt and thousands of artists of similar or lesser stature). The attitude was completely businesslike, creativity was mobilized in the service of the marketplace.

Moreover, artists used conventions - more or less rigid, depending on the period - to express emotions. They traded in emotional expressions where others traded in spices, or engineering skills. But they were all traders and were proud of their artisanship. Their personal lives were subject to gossip, condemnation or admiration but were not considered to be a precondition, an absolutely essential backdrop, to their art.

The romanticist view of the artist painted him into a corner. His life and art became inextricable. Artists were expected to transmute and transubstantiate their lives as well as the physical materials that they dealt with. Living (the kind of life, which is the subject of legends or fables) became an art form, at times predominantly so.

It is interesting to note the prevalence of romanticist ideas in this context: Weltschmerz, passion, and self-destruction were considered fit for the artist. A "boring" artist would never sell as much as a "romantically correct" one. Van Gogh, Kafka and James Dean epitomize this trend: they all died young, lived in misery, endured self-inflicted pain, and met ultimate destruction or annihilation. To paraphrase Sontag, their lives became metaphors, and they all contracted the metaphorically correct physical and mental illnesses of their day and age: Kafka developed tuberculosis, Van Gogh was mentally ill, James Dean died appropriately in an accident. In an age of social anomie, we tend to appreciate and rate highly the anomalous. Munch and Nietzsche will always be preferable to more ordinary (but perhaps equally creative) people.

Today there is an anti-romantic backlash (divorce, the disintegration of the romantic nation-state, the death of ideologies, the commercialization and popularization of art). But this counter-revolution tackles the external, less substantial facets of Romanticism. Romanticism continues to thrive in the flourishing of mysticism, of ethnic lore, and of celebrity worship. It seems that Romanticism has changed vessels but not its cargo.

We are afraid to face the fact that life is meaningless unless WE observe it, unless WE put it in context, unless WE interpret it. WE feel burdened by this realization, terrified of making the wrong moves, of using the wrong contexts, of making the wrong interpretations.

We understand that there is no constant, unchanged, everlasting meaning to life, and that it all really depends on us. We denigrate this kind of meaning. A meaning that is derived by people from human contexts and experiences is bound to be a very poor approximation to the ONE, TRUE meaning. It is bound to be asymptotic to the Grand Design. It might well be - but this is all we have got and without it our lives will indeed prove meaningless.


APA Reference
Vaknin, S. (2008, January 12). Form and Malignant Form: The Metaphorically Correct Artist. HealthyPlace. Retrieved October 2, 2024, from https://www.healthyplace.com/personality-disorders/malignant-self-love/form-and-malignant-form-the-metap

Last Updated: July 4, 2018

The Madness of Playing Games

If a lone, unkempt person standing on a soapbox were to say that he should become the Prime Minister, he would be diagnosed by a passing psychiatrist as suffering from this or that mental disturbance. But were the same psychiatrist to frequent the same spot and see a crowd of millions saluting the same lonely, shabby figure - what would his diagnosis have been? Surely different (perhaps of a more political hue).

It seems that one thing setting social games apart from madness is quantitative: the number of participants involved. Madness is a one-person game, and even mass mental disturbances are limited in scope. Moreover, it has long been demonstrated (for instance, by Karen Horney) that the definition of certain mental disorders is highly dependent upon the context of the prevailing culture. Mental disturbances (including psychoses) are time-dependent and locus-dependent. Religious behaviour and romantic behaviour could easily be construed as psychopathologies when examined out of their social, cultural, historical and political contexts.

Historical figures as diverse as Nietzsche (philosophy), Van Gogh (art), Hitler (politics) and Herzl (political vision) made this smooth phase transition from the lunatic fringe to centre stage. They succeeded in attracting, convincing and influencing a critical human mass, which provided for this transition. They appeared on history's stage (or were placed there posthumously) at the right time and in the right place. The biblical prophets and Jesus are similar examples, though of a more severe disorder. Hitler and Herzl possibly suffered from personality disorders - the biblical prophets were, almost certainly, psychotic.

We play games because they are reversible and their outcomes are reversible. No game-player expects his involvement, or his particular moves, to make a lasting impression on history, fellow humans, a territory, or a business entity. This, indeed, is the major taxonomic difference: the same class of actions can be classified as a "game" when it is not intended to exert a lasting (that is, irreversible) influence on the environment. When such an intention is evident - the very same actions qualify as something completely different. Games, therefore, are only mildly associated with memory. They are intended to be forgotten, eroded by time and entropy, by quantum events in our brains and macro-events in physical reality.

Games - as opposed to absolutely all other human activities - are entropic. Negentropy - the act of reducing entropy and increasing order - is present in a game, only to be reversed later. Nowhere is this more evident than in video games: destructive acts constitute the very foundation of these contraptions. When children start to play (and adults, for that matter - see Eric Berne's books on the subject) they commence by dissolution, by being destructively analytic. Playing games is an analytic activity. It is through games that we recognize our temporariness, the looming shadow of death, our forthcoming dissolution, evaporation, annihilation.

These FACTS we repress in normal life - lest they overwhelm us. A frontal recognition of them would render us speechless, motionless, paralysed. We pretend that we are going to live forever, we use this ridiculous, counter-factual assumption as a working hypothesis. Playing games lets us confront all this by engaging in activities which, by their very definition, are temporary, have no past and no future, temporally detached and physically detached. This is as close to death as we get.

Small wonder that rituals (a variant of games) typify religious activities. Religion is among the few human disciplines which tackle death head on, sometimes as a centrepiece (consider the symbolic sacrifice of Jesus). Rituals are also the hallmark of obsessive-compulsive disorders, which are the reaction to the repression of forbidden emotions (our reaction to the prevalence, pervasiveness and inevitability of death is almost identical). It is when we move from a conscious acknowledgement of the relative lack of lasting importance of games - to the pretension that they are important, that we make the transition from the personal to the social.

The way from madness to social rituals traverses games. In this sense, the transition is from game to myth. A mythology is a closed system of thought, which defines the "permissible" questions, those that can be asked. Other questions are forbidden because they cannot be answered without resorting to another mythology altogether.

Observation is an act which is anathema to the myth. The observer is presumed to be outside the observed system (a presumption which, in itself, is part of the myth of Science, at least until the Copenhagen Interpretation of Quantum Mechanics was developed).

A game looks very strange, unnecessary and ridiculous from the vantage-point of an outside observer. It has no justification, no future, it looks aimless (from the utilitarian point of view), it can be compared to alternative systems of thought and of social organization (the biggest threat to any mythology). When games are transformed to myths, the first act perpetrated by the group of transformers is to ban all observations by the (willing or unwilling) participants.

Introspection replaces observation and becomes a mechanism of social coercion. The game, in its new guise, becomes a transcendental, postulated, axiomatic and doctrinaire entity. It spins off a caste of interpreters and mediators. It distinguishes participants (formerly, players) from outsiders or aliens (formerly observers or uninterested parties). And the game loses its power to confront us with death. As a myth it assumes the function of repression of this fact and of the fact that we are all prisoners. Earth is really a death ward, a cosmic death row: we are all trapped here and all of us are sentenced to die.

Today's telecommunications, transportation, international computer networks and the unification of the cultural offering only serve to exacerbate and accentuate this claustrophobia. Granted, in a few millennia, with space travel and space habitation, the walls of our cells will have practically vanished (or become negligible) with the exception of the constraint of our (limited) longevity. Mortality is a blessing in disguise because it motivates humans to act in order "not to miss the train of life" and it maintains the sense of wonder and the (false) sense of unlimited possibilities.

This conversion from madness to game to myth is subject to meta-laws that are the guidelines of a super-game. All our games are derivatives of this super-game of survival. It is a game because its outcomes are not guaranteed; they are temporary and, to a large extent, not even known (many of our activities are directed at deciphering them). It is a myth because it effectively ignores temporal and spatial limitations. It is one-track-minded: to foster an increase in the population as a hedge against contingencies which lie outside the myth.

All the laws which encourage optimization of resources, accommodation, an increase of order and negentropic results belong, by definition, to this meta-system. We can rigorously claim that there exist no laws, no human activities, outside it. It is inconceivable that it should contain its own negation (Gödel-like), therefore it must be internally and externally consistent. It is as inconceivable that it will be less than perfect - so it must be all-inclusive. Its comprehensiveness is not the formal logical one: it is not the system of all conceivable sub-systems, theorems and propositions (because it is not self-contradictory or self-defeating). It is simply the list of possibilities and actualities open to humans, taking their limitations into consideration. This, precisely, is the power of money. It is - and always has been - a symbol whose abstract dimension far outweighed its tangible one.

This bestowed upon money a preferred status: that of a measuring rod. The outcomes of games and myths alike needed to be monitored and measured. Competition was only a mechanism to secure the ongoing participation of individuals in the game. Measurement was an altogether more important element: the very efficiency of the survival strategy was in question. How could humanity measure the relative performance (and contribution) of its members - and their overall efficiency (and prospects)? Money came in handy. It is uniform, objective, and abstract; it reacts flexibly and immediately to changing circumstances; and it is easily transformable into tangibles - in short, a perfect barometer of the chances of survival at any given gauging moment. It is through its role as a universal comparative scale that it came to acquire the might that it possesses.

Money, in other words, had the ultimate information content: the information concerning survival, the information needed for survival. Money measures performance (which allows for survival-enhancing feedback). Money confers identity - an effective way to differentiate oneself in a world glutted with information, alienating and assimilating. Money cemented a social system of monovalent rating (a pecking order) - which, in turn, optimized decision-making processes through the minimization of the amount of information needed to effect them. The price of a share traded on the stock exchange, for instance, is assumed (by certain theoreticians) to incorporate (and reflect) all the information available regarding this share. Analogously, we can say that the amount of money a person has contains sufficient information regarding his or her ability to survive and his or her contribution to the survivability of others. There may be other - possibly more important - measures of these, but they are, most probably, deficient: not as uniform as money, not as universal, not as potent, etc.

Money is said to buy us love (or to stand for it, psychologically) - and love is the prerequisite to survival. Very few of us would have survived without some kind of love or attention lavished on us. We are dependent creatures throughout our lives. Thus, along an unavoidable path, as humans move from game to myth and from myth to a derivative social organization - they move ever closer to money and to the information that it contains. Money contains information in different modalities. But it all boils down to the very ancient question of the survival of the fittest.

Why Do We Love Sports?

The love of - nay, addiction to - competitive and solitary sports cuts across all socio-economic strata and all demographics. Whether as a passive consumer (spectator), a fan, or as a participant and practitioner, everyone enjoys one form of sport or another. Wherefrom this universal propensity?

Sports cater to multiple deep-seated psychological and physiological needs. In this they are unique: no other activity responds as sports do to so many dimensions of one's person, both emotional and physical. But, on a deeper level, sports provide more than instant gratification of primal (or base, depending on one's point of view) instincts, such as the urge to compete and to dominate.

1. Vindication

Sports, both competitive and solitary, are morality plays. The athlete confronts other sportspersons, or nature, or his (her) own limitations. Winning or overcoming these hurdles is interpreted to be the triumph of good over evil, superior over inferior, the best over merely adequate, merit over patronage. It is a vindication of the principles of quotidian-religious morality: efforts are rewarded; determination yields achievement; quality is on top; justice is done.

2. Predictability

The world is riven by seemingly random acts of terror; replete with inane behavior; governed by uncontrollable impulses; and devoid of meaning. Sports are rule-based. Theirs is a predictable universe where umpires largely implement impersonal, yet just, principles. Sports are about how the world should have been (and, regrettably, isn't). They are a safe delusion; a comfort zone; a promise and a demonstration that humans are capable of engendering a utopia.

3. Simulation

That is not to say that sports are sterile or irrelevant to our daily lives. On the very contrary. They are an encapsulation and a simulation of Life: they incorporate conflict and drama, teamwork and striving, personal struggle and communal strife, winning and losing. Sports foster learning in a safe environment. Better to be defeated in a football match or on the tennis court than to lose your life on the battlefield.

The contestants are not the only ones to benefit. From their detached, safe, and isolated perches, observers of sports games, however vicariously, enhance their trove of experiences; learn new skills; encounter manifold situations; augment their coping strategies; and personally grow and develop.

4. Reversibility

In sports, there is always a second chance, often denied us by Life and nature. No loss is permanent and crippling; no defeat is insurmountable and irreversible. Reversal is but a temporary condition, not the antechamber to annihilation. Safe in this certainty, sportsmen and spectators dare, experiment, venture out, and explore. A sense of adventure permeates all sports and, with few exceptions, it is rarely accompanied by impending doom or the exorbitant proverbial price-tag.

5. Belonging

Nothing like sports to encourage a sense of belonging, togetherness, and we-ness. Sports involve teamwork; a meeting of minds; negotiation and bartering; strategic games; bonding; and the narcissism of small differences (when we reserve our most virulent emotions - aggression, hatred, envy - towards those who resemble us the most: the fans of the opposing team, for instance).

Sports, like other addictions, also provide their proponents and participants with an "exo-skeleton": a sense of meaning; a schedule of events; a regime of training; rites, rituals, and ceremonies; uniforms and insignia. They imbue an otherwise chaotic and purposeless life with a sense of mission and with a direction.

6. Narcissistic Gratification (Narcissistic Supply)

It takes years to become a medical doctor and decades to win a prize or award in academe. It requires intelligence, perseverance, and an inordinate amount of effort. One's status as an author or scientist reflects a potent cocktail of natural endowments and hard labour.

It is far less onerous for a sports fan to acquire and claim expertise and thus inspire awe in his listeners and gain the respect of his peers. The fan may be an utter failure in other spheres of life, but he or she can still stake a claim to adulation and admiration by virtue of their fount of sports trivia and narrative skills.

Sports therefore provide a shortcut to accomplishment and its rewards. As most sports are uncomplicated affairs, the barrier to entry is low. Sports are great equalizers: one's status outside the arena, the field, or the court is irrelevant. One's standing is really determined by one's degree of obsession.


 


APA Reference
Vaknin, S. (2008, January 12). The Madness of Playing Games, HealthyPlace. Retrieved on 2024, October 2 from https://www.healthyplace.com/personality-disorders/malignant-self-love/madness-of-playing

Last Updated: July 4, 2018

Parenting - The Irrational Vocation

The advent of cloning, surrogate motherhood, and the donation of eggs and sperm has shaken the traditional biological definition of parenthood to its foundations. The social roles of parents have similarly been recast by the decline of the nuclear family and the surge of alternative household formats.

Why do people become parents in the first place?

Raising children comprises equal measures of satisfaction and frustration. Parents often employ a psychological defense mechanism - known as "cognitive dissonance" - to suppress the negative aspects of parenting and to deny the unpalatable fact that raising children is time consuming, exhausting, and strains otherwise pleasurable and tranquil relationships to their limits.

Not to mention the fact that the gestational mother experiences "considerable discomfort, effort, and risk in the course of pregnancy and childbirth" (Narayan, U., and J. J. Bartkowiak (1999). Having and Raising Children: Unconventional Families, Hard Choices, and the Social Good. University Park, PA: The Pennsylvania State University Press. Quoted in the Stanford Encyclopedia of Philosophy).

Parenting is possibly an irrational vocation, but humanity keeps breeding and procreating. It may well be the call of nature. All living species reproduce and most of them parent. Is maternity (and paternity) proof that, beneath the ephemeral veneer of civilization, we are still merely a kind of beast, subject to the impulses and hard-wired behavior that permeate the rest of the animal kingdom?

In his seminal tome, "The Selfish Gene", Richard Dawkins suggested that we copulate in order to preserve our genetic material by embedding it in the future gene pool. Survival itself - whether in the form of DNA or, on a higher level, as a species - determines our parenting instinct. Breeding and nurturing the young are mere safe-conduct mechanisms, handing the precious genetic cargo down generations of "organic containers".

Yet, surely, to ignore the epistemological and emotional realities of parenthood is misleadingly reductionistic. Moreover, Dawkins commits the scientific faux-pas of teleology. Nature has no purpose "in mind", mainly because it has no mind. Things simply are, period. That genes end up being forwarded in time does not entail that Nature (or, for that matter, "God") planned it this way. Arguments from design have long - and convincingly - been refuted by countless philosophers.

Still, human beings do act intentionally. Back to square one: why bring children to the world and burden ourselves with decades of commitment to perfect strangers?

First hypothesis: offspring allow us to "delay" death. Our progeny are the medium through which our genetic material is propagated and immortalized. Additionally, by remembering us, our children "keep us alive" after physical death.

These, of course, are self-delusional, self-serving illusions.

Our genetic material gets diluted with time. While it constitutes 50% of the first generation - it amounts to a measly 6% three generations later. If the everlastingness of one's unadulterated DNA were the paramount concern - incest would have been the norm.
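The dilution cited above follows from simple halving: on average, a descendant inherits half of a given ancestor's genetic contribution per generation of sexual reproduction. A minimal sketch of the arithmetic (illustrative only; `genetic_share` is a name chosen here, not from the text):

```python
# Expected fraction of one ancestor's DNA carried by a descendant:
# the share halves with every generation of sexual reproduction.
def genetic_share(generations: int) -> float:
    return 0.5 ** generations

# One generation (a child): 50%.
# Three generations after that first one: 6.25% -
# the "measly 6%" mentioned above.
child = genetic_share(1)    # 0.5
distant = genetic_share(4)  # 0.0625
```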

As for one's enduring memory - well, do you recall, or can you name, your maternal or paternal great-great-grandfather? Of course you can't. So much for that. Intellectual feats or architectural monuments are far more potent mementos.

Still, we have been so thoroughly indoctrinated with this misconception - that children equal immortality - that it yields a baby boom in each post-war period. Having been existentially threatened, people multiply in the vain belief that they thus best protect their genetic heritage and their memory.

Let's study another explanation.

The utilitarian view is that one's offspring are an asset - a kind of pension plan and insurance policy rolled into one. Children are still treated as income-yielding property in many parts of the world. They plough fields and do menial jobs very effectively. People "hedge their bets" by bringing multiple copies of themselves into the world. Indeed, as infant mortality plunges - in the better-educated, higher-income parts of the world - so does fecundity.

In the Western world, though, children have long ceased to be a profitable proposition. At present, they are more of an economic drag and a liability. Many continue to live with their parents into their thirties and consume the family's savings in college tuition, sumptuous weddings, expensive divorces, and parasitic habits. Alternatively, increasing mobility breaks families apart at an early stage. Either way, children are no longer the founts of emotional sustenance and monetary support they allegedly used to be.

How about this one then:

Procreation serves to preserve the cohesiveness of the family nucleus. It further bonds father to mother and strengthens the ties between siblings. Or is it the other way around: is a cohesive and warm family conducive to reproduction?

Both statements, alas, are false.

Stable and functional families sport far fewer children than abnormal or dysfunctional ones. Between one third and one half of all children are born into single-parent or other non-traditional, non-nuclear - typically poor and under-educated - households. In such families children are mostly born unwanted and unwelcome - the sad outcomes of accidents and mishaps, wrong fertility planning, lust gone awry, and misguided turns of events.

The more sexually active people are and the less safe their desirous exploits - the more they are likely to end up with a bundle of joy (the American saccharine expression for a newborn). Many children are the results of sexual ignorance, bad timing, and a vigorous and undisciplined sexual drive among teenagers, the poor, and the less educated.

Still, there is no denying that most people want their kids and love them. They are attached to them and experience grief and bereavement when they die, depart, or are sick. Most parents find parenthood emotionally fulfilling, happiness-inducing, and highly satisfying. This pertains even to unplanned and initially unwanted new arrivals.

Could this be the missing link? Do fatherhood and motherhood revolve around self-gratification? Does it all boil down to the pleasure principle?

Childrearing may, indeed, be habit forming. Nine months of pregnancy and a host of social positive reinforcements and expectations condition the parents to do the job. Still, a living tot is nothing like the abstract concept. Babies cry, soil themselves and their environment, stink, and severely disrupt the lives of their parents. Nothing too enticing here.

One's offspring are a risky venture. So many things can and do go wrong. So few expectations, wishes, and dreams are realized. So much pain is inflicted on the parents. And then the child runs off, and his procreators are left to face the "empty nest". The emotional "returns" on a child are rarely commensurate with the magnitude of the investment.

If you eliminate the impossible, what is left - however improbable - must be the truth. People multiply because it provides them with narcissistic supply.

A narcissist is a person who projects a (false) image onto others and uses the interest this generates to regulate a labile and grandiose sense of self-worth. The reactions garnered by the narcissist - attention, unconditional acceptance, adulation, admiration, affirmation - are collectively known as "narcissistic supply". The narcissist objectifies people and treats them as mere instruments of gratification.

Infants go through a phase of unbridled fantasy, tyrannical behavior, and perceived omnipotence. An adult narcissist, in other words, is still stuck in his "terrible twos" and is possessed with the emotional maturity of a toddler. To some degree, we are all narcissists. Yet, as we grow, we learn to empathize and to love ourselves and others.

This edifice of maturity is severely tested by newfound parenthood.

Babies evoke in the parent the most primordial drives - protective, animalistic instincts, the desire to merge with the newborn, and a sense of terror generated by such a desire (a fear of vanishing and of being assimilated). Neonates engender in their parents an emotional regression.

The parents find themselves revisiting their own childhood even as they are caring for the newborn. The crumbling of decades and layers of personal growth is accompanied by a resurgence of the aforementioned early-infancy narcissistic defenses. Parents - especially new ones - are gradually transformed into narcissists by this encounter and find in their children the perfect sources of narcissistic supply, euphemistically known as love. Really, it is a form of symbiotic codependence between the two parties.

Even the most balanced, most mature, most psychodynamically stable of parents finds such a flood of narcissistic supply irresistible and addictive. It enhances his or her self-confidence, buttresses self-esteem, regulates the sense of self-worth, and projects a complimentary image of the parent to himself or herself.

It fast becomes indispensable, especially in the emotionally vulnerable position in which the parent finds herself, with the reawakening and repetition of all the unresolved conflicts that she had with her own parents.

If this theory is true - if breeding is merely about securing prime-quality narcissistic supply - then the higher the self-confidence, the self-esteem, and the self-worth of the parent, the clearer and more realistic his self-image, and the more abundant his other sources of narcissistic supply - the fewer children he will have. These predictions are borne out by reality.

The higher the education and the income of adults - and, consequently, the firmer their sense of self-worth - the fewer children they have. Children are perceived as counterproductive: not only is their output (narcissistic supply) redundant, they hinder the parent's professional and pecuniary progress.

The more children people can economically afford - the fewer they have. This gives the lie to the Selfish Gene hypothesis. The more educated they are, the more they know about the world and about themselves, the less they seek to procreate. The more advanced the civilization, the more efforts it invests in preventing the birth of children. Contraceptives, family planning, and abortions are typical of affluent, well informed societies.

The more plentiful the narcissistic supply afforded by other sources - the lesser the emphasis on breeding. Freud described the mechanism of sublimation: the sex drive, the Eros (libido), can be "converted" or "sublimated" into other activities. All the sublimatory channels - politics and art, for instance - are narcissistic and yield narcissistic supply. They render children superfluous. Creative people have fewer children than average, or none at all, because they are narcissistically self-sufficient.


 


The key to our determination to have children is our wish to experience the same unconditional love that we received from our mothers: the intoxicating feeling of being adored without caveats, for what we are, with no limits, reservations, or calculations. This is the most powerful, crystallized form of narcissistic supply. It nourishes our self-love, self-worth, and self-confidence. It infuses us with feelings of omnipotence and omniscience. In these, and other respects, parenthood is a return to infancy.

Note: Parenting as a Moral Obligation

Do we have a moral obligation to become parents? Some would say: yes. There are three types of arguments to support such a contention:

(i) We owe it to humanity at large to propagate the species, or to society to provide manpower for future tasks;

(ii) We owe it to ourselves to realize our full potential as human beings and as males or females by becoming parents;

(iii) We owe it to our unborn children to give them life.

The first two arguments are easy to dispense with. We have a minimal moral obligation to humanity and society: to conduct ourselves so as not to harm others. All other ethical edicts are either derivative or spurious. Similarly, we have a minimal moral obligation to ourselves: to be happy (while not harming others). If bringing children into the world makes us happy, so much the better. If we would rather not procreate, it is perfectly within our rights not to do so.

But what about the third argument?

Only living people have rights. There is a debate over whether an egg is a living person, but there can be no doubt that it exists. Its rights - whatever they are - derive from the fact that it exists and that it has the potential to develop life. The right to be brought to life (the right to become or to be) pertains to a yet non-alive entity and, therefore, is null and void. Had this right existed, it would have implied an obligation or duty to give life to the unborn and the not-yet-conceived. No such duty or obligation exists.


APA Reference
Vaknin, S. (2008, January 12). Parenting - The Irrational Vocation, HealthyPlace. Retrieved on 2024, October 2 from https://www.healthyplace.com/personality-disorders/malignant-self-love/parenting-the-irrational-vocation

Last Updated: July 4, 2018

The Happiness of Others

Is there any necessary connection between our actions and the happiness of others? Disregarding, for a moment, the murkiness of the definition of "actions" in the philosophical literature - two types of answers have hitherto been provided.

Sentient Beings (referred to, in this essay, as "Humans" or "persons") seem either to limit each other - or to enhance each other's actions. Mutual limitation is, for instance, evident in game theory. It deals with decision outcomes when all the rational "players" are fully aware of both the outcomes of their actions and of what they prefer these outcomes to be. They are also fully informed about the other players: they know that they are rational, too, for instance. This, of course, is a very far-fetched idealization. A state of unbounded information is nowhere and never to be found. Still, in most cases, the players settle into one of the Nash equilibrium solutions. Their actions are constrained by the existence of the others.
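The "mutual limitation" invoked here can be made concrete with the textbook Prisoner's Dilemma: each player's best choice depends on the other's, and the only stable (Nash) outcome leaves both worse off than mutual cooperation would. A minimal sketch (the payoff numbers are standard illustrative values, not from the text):

```python
# Pure-strategy Nash equilibria of a two-player game: an outcome where
# neither player can gain by unilaterally changing his own action.
ACTIONS = ("cooperate", "defect")

# Illustrative Prisoner's Dilemma payoffs (row player, column player).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def is_nash(row, col):
    """True if neither player profits from deviating alone."""
    r_pay, c_pay = PAYOFFS[(row, col)]
    row_best = all(PAYOFFS[(r, col)][0] <= r_pay for r in ACTIONS)
    col_best = all(PAYOFFS[(row, c)][1] <= c_pay for c in ACTIONS)
    return row_best and col_best

equilibria = [(r, c) for r in ACTIONS for c in ACTIONS if is_nash(r, c)]
# Mutual defection is the only equilibrium, although mutual cooperation
# would leave both players better off: each is limited by the other.
```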

The "Invisible Hand" of Adam Smith (which, among other things, benignly and optimally regulates the market and the price mechanisms) is also a "mutually limiting" model. Numerous individual participants strive to maximize their (economic and financial) outcomes - and end up merely optimizing them. The reason lies in the existence of others within the "market". Again, they are constrained by other people's motivations, priorities and, above all, actions.

All the consequentialist theories of ethics deal with mutual enhancement. This is especially true of the Utilitarian variety. Acts (whether judged individually or in conformity to a set of rules) are moral if their outcome increases utility (also known as happiness or pleasure). They are morally obligatory if they maximize utility and no alternative course of action can do so. Other versions talk about an "increase" in utility rather than its maximization. Still, the principle is simple: for an act to be judged "moral, ethical, virtuous, or good", it must influence others in a way which will "enhance" and increase their happiness.

The flaws in all the above answers are evident and have been explored at length in the literature. The assumptions are dubious (fully informed participants, rationality in decision making and in prioritizing the outcomes, etc.). All the answers are instrumental and quantitative: they strive to offer a moral measuring rod. An "increase" entails the measurement of two states: before and after the act. Moreover, it demands full knowledge of the world and a type of knowledge so intimate, so private, that it is not even certain that the players themselves have conscious access to it. Who goes around equipped with an exhaustive list of his priorities and another list of all the possible outcomes of all the acts that he may commit?

But there is another, basic flaw: these answers are descriptive, observational, phenomenological in the restrictive sense of these words. The motives, the drives, the urges, the whole psychological landscape behind the act are deemed irrelevant. The only thing relevant is the increase in utility/happiness. If the latter is achieved, the former might as well not have existed. A computer which increases happiness is morally equivalent to a person who achieves a quantitatively similar effect. Even worse: two persons acting out of different motives (one malicious and one benevolent) will be judged morally equivalent if their acts increase happiness similarly.

But, in life, an increase in utility or happiness or pleasure is CONDITIONED upon, is the RESULT of, the motives behind the acts that led to it. Put differently: the utility functions of two acts depend decisively on the motivation, drive, or urge behind them. The process which leads to the act is an inseparable part of the act and of its outcomes, including the outcomes in terms of the subsequent increase in utility or happiness. We can safely distinguish the "utility-contaminated" act from the "utility-pure (or ideal)" act.

If a person does something which is supposed to increase overall utility, but does so in order to increase his own utility more than the expected average increase, the resulting increase will be lower. The maximum overall utility increase is achieved when the actor forgoes all increase in his personal utility. It seems that there is a constant of utility increase and a conservation law pertaining to it, such that a disproportionate increase in one's personal utility translates into a decrease in the overall average utility. It is not a zero-sum game, because the potential increase is infinite - but the rules of distribution of the utility added after the act seem to dictate an averaging of the increase in order to maximize the result.

The same pitfalls await these observations as did the previous ones. The players must be in possession of full information, at least regarding the motivation of the other players. "Why is he doing this?" and "why did he do what he did?" are not questions confined to the criminal courts. We all want to understand the "why's" of actions long before we engage in utilitarian calculations of increased utility. This also seems to be the source of many an emotional reaction concerning human actions. We are envious because we think that the utility increase was unevenly divided (when adjusted for efforts invested and for the prevailing cultural mores). We suspect outcomes that are "too good to be true". Actually, this very sentence proves my point: even if something produces an increase in overall happiness, it will be considered morally dubious if the motivation behind it remains unclear or seems to be irrational or culturally deviant.

Two types of information are, therefore, always needed: the first (discussed above) concerns the motives of the main protagonists, the actors. The second type relates to the world. Full knowledge about the world is also a necessity: the causal chains (actions lead to outcomes), what increases the overall utility or happiness, and for whom, etc. To assume that all the participants in an interaction possess this tremendous amount of information is an idealization (one also used in modern economic theories). It should be regarded as such and not be confused with reality, in which people approximate, estimate, extrapolate, and evaluate based on much more limited knowledge.


 


Two examples come to mind:

Aristotle described the "Great Soul". It is a virtuous agent (actor, player) that judges himself to be possessed of a great soul (in a self-referential evaluative disposition). He has the right measure of his worth and he courts the appreciation of his peers (but not of his inferiors), which he believes he deserves by virtue of being virtuous. He has a dignity of demeanour which is also very self-conscious. He is, in short, magnanimous (for instance, he forgives his enemies their offences). He seems to be the classic case of a happiness-increasing agent - but he is not. And the reason that he fails to qualify as such is that his motives are suspect. Does he refrain from assaulting his enemies out of charity and generosity of spirit - or because doing so is likely to dent his pomposity? It is sufficient that a POSSIBLE different motive exist - to ruin the utilitarian outcome.

Adam Smith, on the other hand, adopted the spectator theory of his teacher Francis Hutcheson. The "morally good" is a euphemism: it is really the name given to the pleasure which a spectator derives from seeing a virtue in action. Smith added that the reason for this emotion is the similarity between the virtue observed in the agent and the virtue possessed by the observer. It is of a moral nature because of the object involved: the agent tries consciously to conform to standards of behaviour which will not harm the innocent while, simultaneously, benefiting himself, his family, and his friends. This, in turn, benefits society as a whole. Such a person is likely to be grateful to his benefactors and to sustain the chain of virtue by reciprocating. The chain of good will thus multiply endlessly.

Even here, we see that the question of motive and psychology is of utmost importance. WHY is the agent doing what he is doing? Does he really conform to society's standards INTERNALLY? Is he GRATEFUL to his benefactors? Does he WISH to benefit his friends? These are all questions answerable only in the realm of the mind. Really, they are not answerable at all.

 


 


APA Reference
Vaknin, S. (2008, January 11). The Happiness of Others, HealthyPlace. Retrieved on 2024, October 2 from https://www.healthyplace.com/personality-disorders/malignant-self-love/happiness-of-others

Last Updated: July 4, 2018