Key Word(s): git branches, Python, closures, decorators
Topics:
- git branches
- Recap: How does Python really work?
- Nested environments
- Closures
- Decorators
Branching Demo¶
%%bash
cd /tmp
rm -rf cs207_david_sondak #remove if it exists
git clone https://github.com/dsondak/cs207_david_sondak.git
Cloning into 'cs207_david_sondak'...
Once you're in your course repo, you can look at all the branches and print out a lot of information to the screen.
%%bash
cd /tmp/cs207_david_sondak
git branch -avv
* master 2710d50 [origin/master] Stupid .ipynb auto-save.
  remotes/origin/HEAD -> origin/master
  remotes/origin/ad-project 6792fc3 Fixed typo in __pow__; included roots out example.
  remotes/origin/gh-pages 36aa8a1 Shifted schedule around.
  remotes/origin/lecture4_exercise 06c0fbd Added another test file to demonstrate git features
  remotes/origin/master 2710d50 Stupid .ipynb auto-save.
  remotes/origin/tim 4bc23fb Added MathCS207.py
All of these branches are nothing but commit-streams in disguise, as can be seen above. It's a very simple model that leads to a lot of interesting version control patterns.
Since branches are so light-weight, the recommended way of working on software using git is to create a new branch for each new feature you add, test it out, and, if it's good, merge it into master. Then you deploy the software from master. We have been using branches under the hood. Let's now lift the hood.
branch¶
If you run git branch without having created any branches, it will list only one, called master. This is the default branch. You have also seen the use of git branch -avv to show all branches (even remote ones). You create a new branch with git branch branch_name.
It's important to note that this new branch is not active. If you make changes, those changes will still apply to the master branch, not branch_name. That is, after executing the git branch branch_name command you're still on the master branch and not the branch_name branch. To change this, you need the next command.
checkout¶
A note on checkout¶
Checkout switches the active branch. Since branches can have different changes, checkout may make the working directory look very different. For instance, if you have added new files to one branch and then check out another branch, those files will no longer show up in the working directory. They are still stored in the .git folder, but since they only exist on the branch where they were added, they cannot be accessed until you check that branch out again.
%%bash
cd /tmp/cs207_david_sondak
git branch lecture4_demos
See what branches we have created.
%%bash
cd /tmp/cs207_david_sondak
git branch
  lecture4_demos
* master
Notice that you have created the lecture4_demos branch but you're still on the master branch.
Jump onto the lecture4_demos branch.
%%bash
cd /tmp/cs207_david_sondak
git checkout lecture4_demos
git branch
* lecture4_demos
  master
Switched to branch 'lecture4_demos'
Notice that it is bootstrapped off the master branch and has the same files. You can check that with the ls command.
%%bash
cd /tmp/cs207_david_sondak
ls
LICENSE README.md homeworks lectures legacy notes project supplementary-python
Note: You could have created this branch and switched to it all in one go by using
git checkout -b lecture4_demos
Now let's check the status of our repo.
%%bash
cd /tmp/cs207_david_sondak
git status
On branch lecture4_demos
nothing to commit, working tree clean
Alright, so we're on our new branch but we haven't added or modified anything yet; there's nothing to commit.
Adding a file on a new branch¶
Let's add a new file. Note that this file gets added on this branch only!
Notice that I'm still using the echo command. Once again, this is only because jupyter can't work with text editors. If I were you, I'd use vim, but you can use whatever text editor you like.
%%bash
cd /tmp/cs207_david_sondak
echo '# Things I wish G.R.R. Martin would say: Finally updating A Song of Ice and Fire.' > books.md
git status
On branch lecture4_demos
Untracked files:
  (use "git add <file>..." to include in what will be committed)
	books.md
nothing added to commit but untracked files present (use "git add" to track)
We add the file to the index, then commit the file to the local repository on the lecture4_demos branch.
%%bash
cd /tmp/cs207_david_sondak
git add books.md
git status
On branch lecture4_demos
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)
	new file:   books.md
%%bash
cd /tmp/cs207_david_sondak
git commit -am "Added another test file to demonstrate git features" # Make sure you really understand what the `-am` option does!
[lecture4_demos 7c40664] Added another test file to demonstrate git features
 1 file changed, 1 insertion(+)
 create mode 100644 books.md
%%bash
cd /tmp/cs207_david_sondak
git status
On branch lecture4_demos
nothing to commit, working tree clean
At this point, we have committed a new file (books.md) to our new branch in our local repo. Our remote repo is still not aware of this new file (or branch). In fact, our master branch is still not really aware of this file.
Note: There are really two options at this point:
- Push the current branch to our upstream repo. This would correspond to a "long-lived" branch. You may want to do this if you have a version of your code that you are maintaining.
- Merge the new branch into the local master branch. Depending on your chosen workflow, this may happen much more frequently than the first option. You'll be creating branches all the time for little bug fixes and features. You don't necessarily want such branches to be "long-lived". Once your feature is ready, you'll merge the feature branch into the master branch, stage, commit, and push (all on master). Then you'll delete the "short-lived" feature branch.
We'll continue with the first option for now and discuss the other option later.
Long-lived branches¶
Ok, we have committed. Lets try to push!
%%bash
cd /tmp/cs207_david_sondak
git push
fatal: The current branch lecture4_demos has no upstream branch.
To push the current branch and set the remote as upstream, use

    git push --set-upstream origin lecture4_demos
CalledProcessError: Command 'b'cd /tmp/cs207_david_sondak\ngit push\n'' returned non-zero exit status 128.
Fail! Why? Because git didn't know what to push to on origin (the name of our remote repo) and didn't want to assume we wanted to call the branch lecture4_demos on the remote. We need to tell that to git explicitly (just like it tells us to).
%%bash
cd /tmp/cs207_david_sondak
git push --set-upstream origin lecture4_demos
Branch 'lecture4_demos' set up to track remote branch 'lecture4_demos' from 'origin'.
remote:
remote: Create a pull request for 'lecture4_demos' on GitHub by visiting:
remote:      https://github.com/dsondak/cs207_david_sondak/pull/new/lecture4_demos
remote:
To https://github.com/dsondak/cs207_david_sondak.git
 * [new branch]      lecture4_demos -> lecture4_demos
Aha, now we have both a remote and a local for lecture4_demos. We can use the convenient arguments to branch in order to see the details of all the branches.
%%bash
cd /tmp/cs207_david_sondak
git branch -avv
* lecture4_demos 7c40664 [origin/lecture4_demos] Added another test file to demonstrate git features
  master 2710d50 [origin/master] Stupid .ipynb auto-save.
  remotes/origin/HEAD -> origin/master
  remotes/origin/ad-project 6792fc3 Fixed typo in __pow__; included roots out example.
  remotes/origin/gh-pages 36aa8a1 Shifted schedule around.
  remotes/origin/lecture4_demos 7c40664 Added another test file to demonstrate git features
  remotes/origin/lecture4_exercise 06c0fbd Added another test file to demonstrate git features
  remotes/origin/master 2710d50 Stupid .ipynb auto-save.
  remotes/origin/tim 4bc23fb Added MathCS207.py
We make sure we are back on master.
%%bash
cd /tmp/cs207_david_sondak
git checkout master
Your branch is up to date with 'origin/master'.
Switched to branch 'master'
What have we done?
We created a new local branch, created a file on it, created that same branch on our remote repo, and pushed all the changes. Finally, we went back to our master branch to continue work there.
Git habits¶
Commit early, commit often.
Git is more effective when used at a fine granularity. For starters, you can't undo what you haven't committed, so committing lots of small changes makes it easier to find the right rollback point. Also, merging becomes a lot easier when you only have to deal with a handful of conflicts.
Commit unrelated changes separately.
Identifying the source of a bug or understanding the reason why a particular piece of code exists is much easier when commits focus on related changes. Some of this has to do with simplifying commit messages and making it easier to look through logs, but it has other related benefits: commits are smaller and simpler, and merge conflicts are confined to only the commits which actually have conflicting code.
Do not commit binaries and other temporary files.
Git is meant for tracking changes. In nearly all cases, the only meaningful difference between the contents of two binaries is that they are different. If you change source files, compile, and commit the resulting binary, git sees an entirely different file. The end result is that the git repository (which contains a complete history, remember) begins to become bloated with the history of many dissimilar binaries. Worse, there's often little advantage to keeping those files in the history. An argument can be made for periodically snapshotting working binaries, but things like object files, compiled python files, and editor auto-saves are basically wasted space.
Ignore files which should not be committed
Git comes with a built-in mechanism for ignoring certain types of files. Placing filenames or wildcards in a .gitignore file in the top-level directory (where the .git directory is also located) will cause git to ignore those files when checking file status. This is a good way to ensure you don't commit the wrong files accidentally, and it also makes the output of git status somewhat cleaner.
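As an illustration (just a typical starting point, not an official or complete list), a .gitignore for a Python project might contain patterns like the following. Each line is a filename or wildcard pattern; matching files are skipped by git status and by git add . unless you force-add them.
__pycache__/
*.pyc
.ipynb_checkpoints/
*.o
*.out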
Always make a branch for new changes
While it's tempting to work on new code directly in the master branch, it's usually a good idea to create a new one instead, especially for team-based projects. The major advantage to this practice is that it keeps logically disparate change sets separate. This means that if two people are working on improvements in two different branches, when they merge, the actual workflow is reflected in the git history. Plus, explicitly creating branches adds some semantic meaning to your branch structure. Moreover, there is very little difference in how you use git.
Write good commit messages
I cannot overstate the importance of this.
Seriously. Write good commit messages.
Basic Python¶
Recap So Far¶
It is assumed that you are familiar with the very basics of Python and especially its syntax. For example, you should have reviewed the supplementary Python notebooks that go along with this course. They contain, among other things, the following topics:
- python types
- Basic data structures including lists, dictionaries, and tuples
- How to write user-defined functions including variable numbers of arguments (i.e. the *args and **kwargs syntax)
- for loops including the indispensable enumerate and zip formats
- Proper syntax for opening files (i.e. the with syntax)
- Basic exception handling
- Plotting with matplotlib
Today, we will fill in a few of the gaps by revealing a bit more about what is going on under the hood.
Preliminaries¶
Before we begin, there is something that you should know about. It's http://pythontutor.com/ and it's a great way to learn what is going on under the hood when you write Python. You can visualize the Python code you write: Visualize. I'll have you test out and visualize some very small scripts using that website.
Note: When trying to embed HTML into your notebook, you need to use the syntax HTML('url'). pythontutor has a Generate embed code button which will generate the necessary code to embed into your webpage.
Reference Variables¶
Let's revisit the most basic Python. Here, we'll just assign values to some names.
Note that a variable in Python is called a name. So the assignment statement a = 1 says that the name a is assigned the integer value 1. You can call a a variable too if you like.
from IPython.display import HTML # Allows us to embed HTML into our notebook.
HTML('<iframe width="800" height="400" frameborder="0" src="http://pythontutor.com/iframe-embed.html#code=a%20%3D%20%5B1,%203,%205%5D%0Ab%20%3D%20a%0Aprint%28%22a%20%3D%20%7B0%7D%20and%20has%20id%20%7B1%7D%22.format%28a,%20id%28a%29%29%29%0Aprint%28%22b%20%3D%20%7B0%7D%20and%20has%20id%20%7B1%7D%22.format%28b,%20id%28b%29%29%29%0Aprint%28%22Is%20b%20a%3F%20%7B0%7D%22.format%28b%20is%20a%29%29%0A%0Aa.append%287%29%0Aprint%28%22a%20%3D%20%7B%7D%22.format%28a%29%29%0Aprint%28%22b%20%3D%20%7B%7D%22.format%28b%29%29&codeDivHeight=400&codeDivWidth=350&cumulative=false&curInstr=0&heapPrimitives=false&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false"> </iframe>')
So what is going on? Well, Python variables are reference variables. You could say "the variable a (b) is assigned to a list" rather than "the list is assigned to the variable a (b)".
From the Python Language Reference, Section 3.1:
Every object has an identity, a type and a value. An object’s identity never changes once it has been created; you may think of it as the object’s address in memory. The ‘is‘ operator compares the identity of two objects; the id() function returns an integer representing its identity (currently implemented as its address).
Note, if the example on the website doesn't render, here is the code for you to try in pythontutor.com:
a = [1, 3, 5]
b = a
print("a = {0} and has id {1}".format(a, id(a)))
print("b = {0} and has id {1}".format(b, id(b)))
print("Is b a? {0}".format(b is a))
a.append(7)
print("a = {}".format(a))
print("b = {}".format(b))
Python Types¶
- Every variable in Python gets a type (e.g. float, string, etc.)
- Python is a strongly typed language
- It is also dynamically typed
  - Types are assigned at run-time rather than at compile time as in a language like C
    - This makes it slower, since the way data is stored cannot be chosen optimally ahead of time
    - When the program starts you don't know what each variable will point to
Here is a discussion from Chapter 11: Further Reading in Fluent Python:
Strong versus weak typing
"If the language rarely performs implicit conversion of types, it’s considered strongly typed; if it often does it, it’s weakly typed. Java, C++, and Python
are strongly typed. PHP, JavaScript, and Perl are weakly typed."
Static versus dynamic typing
"If type-checking is performed at compile time, the language is statically typed; if it happens at runtime, it’s dynamically typed. Static typing requires type declarations (some modern languages use type inference to avoid some of that). Fortran and Lisp are the two oldest programming languages still alive and they use, respectively, static and dynamic typing."
Aside on Compilers¶
We mentioned the word "compile". For our purposes, the meaning of "compile" is to turn human-readable/written code into machine-readable/executable code.
Compiler technology is really amazing!
Many languages require you to compile the program into an executable first before running it.
Other languages, like Python, compile and execute code at run-time: the standard CPython interpreter translates source code to bytecode on the fly and then interprets it (some implementations, such as PyPy, add a just-in-time (JIT) compiler).
Frames¶
Whenever we use Python Tutor we see two columns. The first column is labeled Frames.
What is a frame?
The evaluation of any expression requires knowledge of the context in which the expression is being evaluated. This context is called a frame. An environment is a sequence of frames, with each frame or context having a bunch of labels, or bindings, associating variables with values.
The sequence starts at the "global" frame, which has bindings for imports, built-ins, etc.
HTML('<iframe width="1000" height="500" frameborder="0" src="http://pythontutor.com/iframe-embed.html#code=a%20%3D%20%5B2,%203,%204%5D%0Ac1%20%3D%202.0**2.0%0Ac2%20%3D%20%5Bi**2.0%20for%20i%20in%20a%5D%0Aprint%28c2%29&codeDivHeight=400&codeDivWidth=350&cumulative=false&curInstr=0&heapPrimitives=false&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false"> </iframe>')
Note, if the example on the website doesn't render, here is the code for you to try in pythontutor.com:
a = [2, 3, 4]
c1 = 2.0**2.0
c2 = [i**2.0 for i in a]
print(c2)
Functions and Environments¶
Functions are first class in Python. If you don't know what this means, please consult the supplementary Python lecture notes.
HTML('<iframe width="1000" height="500" frameborder="0" src="http://pythontutor.com/iframe-embed.html#code=s%20%3D%20\'The%20lost%20world...\'%0Alen_of_s%20%3D%20len%28s%29%0Amy_len%20%3D%20len%0Amy_len_of_s%20%3D%20my_len%28s%29&codeDivHeight=400&codeDivWidth=350&cumulative=false&curInstr=0&heapPrimitives=false&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false"> </iframe>')
Note, if the example on the website doesn't render, here is the code for you to try in pythontutor.com:
s = 'The lost world...'
len_of_s = len(s)
my_len = len
my_len_of_s = my_len(s)
Defining your own environment¶
When we apply a user defined function to some arguments, something slightly different happens from what we saw in the previous example:
- We bind the names of the arguments in a new local frame
- We evaluate the body of the function in this new frame
HTML('<iframe width="1600" height="500" frameborder="0" src="http://pythontutor.com/iframe-embed.html#code=def%20check_oddness%28x%29%3A%0A%20%20%20%20if%20x%252%20%3D%3D%200%3A%0A%20%20%20%20%20%20%20%20s%20%3D%20%22%7B%7D%20is%20even.%22.format%28x%29%0A%20%20%20%20else%3A%0A%20%20%20%20%20%20%20%20s%20%3D%20%22%7B%7D%20is%20odd%20an%20odd%20one.%22.format%28x%29%0A%20%20%20%20return%20s%0A%20%20%20%20%0Aa%20%3D%206.0%0An1%20%3D%20check_oddness%28a%29%0A%0Ab%20%3D%2015.0%0An2%20%3D%20check_oddness%28b%29&codeDivHeight=400&codeDivWidth=350&cumulative=false&curInstr=0&heapPrimitives=nevernest&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false"> </iframe>')
Note, if the example on the website doesn't render, here is the code for you to try in pythontutor.com:
def check_oddness(x):
    if x%2 == 0:
        s = "{} is even.".format(x)
    else:
        s = "{} is odd an odd one.".format(x)
    return s
a = 6.0
n1 = check_oddness(a)
b = 15.0
n2 = check_oddness(b)
Model of Evaluation¶
The combination of
- environments
- variables bound to values
- functions
together describes a Model of Evaluation. This model can be used to implement an interpreter for a programming language.
Parameters are passed by sharing in Python¶
Each formal parameter in a function gets "a copy of the reference". Thus the parameter names inside the function become aliases of the actual arguments. You could also say: a function gets a copy of the arguments, but the arguments are always references.
Actually, this discussion can be a bit more nuanced than what we just presented. Here are some more detailed references for those interested:
def f(x):
    print(id(x))
Note: id(x) gives the identity of the object x refers to; in CPython this is its memory address.
d = {'a':17.0, 'b':35.0}
print(id(d))
140313827837664
f(d)
140313827837664
There is an object that x is bound to (it's a dictionary). f creates a binding within its scope to the object that x is bound to.
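A short sketch of the practical consequence (the function names below are made up for illustration): because a parameter is an alias of the caller's object, mutating that object is visible to the caller, while rebinding the parameter name inside the function is not.
def append_one(lst):
    lst.append(1)      # mutates the shared list; the caller sees this

def rebind(lst):
    lst = [99]         # rebinds the local name only; the caller's list is untouched

a = [0]
append_one(a)
print(a)               # [0, 1]
rebind(a)
print(a)               # still [0, 1]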
A few more comments¶
The binding of names (from Python Execution Model Document)¶
"The following constructs bind names: formal parameters to functions, import statements, class and function definitions (these bind the class or function name in the defining block), and targets that are identifiers if occurring in an assignment, for loop header, or after as in a with statement or except clause. The import statement of the form from ... import * binds all names defined in the imported module, except those beginning with an underscore. This form may only be used at the module level."
"If a name is bound in a block, it is a local variable of that block, unless declared as nonlocal or global. If a name is bound at the module level, it is a global variable. (The variables of the module code block are local and global.) If a variable is used in a code block but not defined there, it is a free variable."
The lookup of names¶
A scope defines the visibility of a name within a block. If a local variable is defined in a block, its scope includes that block. If the definition occurs in a function block, the scope extends to any blocks contained within the defining one, unless a contained block introduces a different binding for the name.
When a name is used in a code block, it is resolved using the nearest enclosing scope. The set of all such scopes visible to a code block is called the block’s environment.
import numpy as np
c = 5000.0
def do_integral(function):
    c = 13.0
    # Some algorithm for carrying out an integration
    print(c)
x = np.linspace(-1.0, 1.0, 100)
y = x * x
do_integral(y)
13.0
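To connect this to the quote above about local and global names, here is a small sketch (hypothetical names): assigning to a name inside a function creates a local binding unless the name is declared global.
c = 5000.0

def set_c_locally():
    c = 13.0          # binds a new local name; the module-level c is untouched

def set_c_globally():
    global c          # assignments now refer to the module-level name
    c = 13.0

set_c_locally()
print(c)              # 5000.0
set_c_globally()
print(c)              # 13.0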
Towards Intermediate Python¶
- Nested environments
- Closures
- Decorators
Nested Environments¶
You can nest the definitions of functions. When you do this, inner function definitions are not even evaluated until the outer function is called. These inner functions have access to the name bindings in the scope of the outer function.
In the example below, in make_statement(), both s and key will be defined.
def make_statement(s):
    def key(k):
        c = (s, k)
        return c
    return key
key_val = make_statement('name: ')
# We have captured the first element of the tuple as a "kind of state"
name = key_val('Albert')
print(name)
('name: ', 'Albert')
name2 = key_val('Emmy')
print(name2)
('name: ', 'Emmy')
In key, you have access to s. This sharing is called lexical scoping.
Lexical scoping refers to the part of the program ("area of text") where a name-binding is valid.
Here is a more explicit explanation: In the line key_val = make_statement('name: '), make_statement() has returned the inner function key and the inner function has been given the name key_val. Now, when we call key_val() the inner function returns the desired tuple.
The reason this works is that, in addition to the environment in which a user-defined function is running, that function has access to a second environment: the environment in which the function was defined. Here, key has access to the environment of make_statement. In this sense the environment of make_statement is the parent of the environment of key.
This enables two things:
- Names inside the inner functions (or the outer ones for that matter) do not interfere with names in the global scope. Inside the outer and inner functions, the "most lexically local" names are the ones that matter
- An inner function can access the environment of its enclosing (outer) function
Closures¶
Since the inner functions can "capture" information from an outer function's environment, the inner function is sometimes called a closure.
Once s is captured by the inner function, it cannot be changed: we have lost direct access to its manipulation.
def make_statement(s):
    def key(k):
        c = (s, k)
        return c
    return key
This process is called encapsulation, and is a cornerstone of object oriented programming.
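As a small sketch of this encapsulation (make_counter is a hypothetical name, not part of the lecture code): the captured value lives only inside the closure, and the only way to touch it from outside is through the inner function that closes over it. Rebinding a captured name from inside the inner function requires the nonlocal declaration.
def make_counter(start):
    count = start          # captured by the inner function

    def increment():
        nonlocal count     # needed to rebind the captured name
        count += 1
        return count

    return increment

tick = make_counter(10)
print(tick())   # 11
print(tick())   # 12
# There is no direct way to read or reset count from outside;
# it is hidden (encapsulated) in the closure.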
Augmenting Functions¶
Since functions are first class, we might want to augment them to report, for example, call information, timing information, etc.
Example 1¶
In the following function, timer() accepts a function f as its argument and returns an inner function called inner.
# First we write our timer function
import time
def timer(f):
    def inner(*args):
        t0 = time.time()
        output = f(*args)
        elapsed = time.time() - t0
        print("Time Elapsed", elapsed)
        return output
    return inner
inner accepts a variable argument list and wraps the function f with timers to time how long it takes f to execute.
Note that f is passed a variable argument list (see the supplementary notes).
# Now we prepare to use our timer function
import numpy as np # Import numpy
# User-defined functions
def allocate1(x, N):
    return [x]*N

def allocate2(x, N):
    return x * np.ones(N)
x = 1.0
N = 2**25
print(N)
# Time allocation with lists
my_alloc = timer(allocate1)
l1 = my_alloc(x, N)
# Time allocation with numpy array
my_alloc2 = timer(allocate2)
l2 = my_alloc2(x, N)
33554432
Time Elapsed 0.4117319583892822
Time Elapsed 1.199855089187622
That seemed pretty useful. We might want to do such things a lot (and not just for timing purposes).
Decorators¶
Let's recap the pattern that was so useful.
Basically, we wrote a nice function to "decorate" our function of interest. In this case, we wrote a timer function whose closure wrapped up any function we gave to it in a timing construct. In order to invoke our nice decorations, we had to pass a function to the timer function and get a new, decorated function back. Then we called the decorated function.
So the idea is as follows. We have a decorator (call it decorator) that sweetens up some function (call it target).
def target():
    pass
decorated_target = decorator(target)
Python provides what's called syntactic sugar. We can just write:
@decorator
def target():
    pass
Now target is decorated. Let's see how this all works.
@timer
def allocate1(x, N):
    return [x]*N
x = 2.0
N = 2**20
l1 = allocate1(x, N)
Time Elapsed 0.010313034057617188
Very nice! Make sure you understand what happened here. That syntactic sugar hides all of the details.
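One detail the sugar hides (a quick check you can run here, reusing the timer decorator defined above): the name allocate1 is now bound to the inner closure that timer returned, not to the original function object.
print(allocate1)            # <function timer.<locals>.inner at 0x...>
print(allocate1.__name__)   # 'inner', not 'allocate1'
If you want a decorated function to keep its original name and docstring, the standard library's functools.wraps decorator is the usual tool.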
Example 2¶
We'll just create a demo decorator here.
def decorate(f):
    print("Let's decorate!")
    d = 1.0
    def wrapper(*args):
        print("Entering function.")
        output = f(*args)
        print("Exited function.")
        if output > d:
            print("My distance is bigger than yours.")
        elif output < d:
            print("Your distance is bigger than mine.")
        else:
            print("Our distances are the same size.")
        return output
    return wrapper
@decorate
def useful_f(a, b, c):
    d1 = np.sqrt(a * a + b * b + c * c)
    return d1
Let's decorate!
import numpy as np
d = useful_f(1.0, 2.0, 3.0)
Entering function.
Exited function.
My distance is bigger than yours.
A key thing to remember is that a decorator is run right after the function is defined, not when the function is called. Thus if you had the above decorator code in a module, it would print "Let's decorate!" when importing the module.
Notice that the concept of a closure is used: the state d = 1.0 is captured into the decorated function above.
Back to Breakout Rooms!¶
- Figure out who has the median height in your group. They will be the speaker.
- Think of another useful decorator. Please don't look up common decorators; I want you to think of this on your own.
- In broad strokes, how would you implement this decorator?